David J. Brown Access to Scientific Research
Global Studies in Libraries and Information
Edited by Ian M. Johnson

Editorial Board
Johannes Britz (South Africa/U.S.A.)
Barbara Ford (U.S.A.)
Peter Lor (South Africa)
Kay Raseroka (Botswana)
Abdus Sattar Chaudry (Pakistan/Kuwait)
Kerry Smith (Australia)
Anna Maria Tammaro (Italy)
Volume 2
David J. Brown
Access to Scientific Research Challenges Facing Communications in STM
ISBN 978-3-11-037516-9
e-ISBN (PDF) 978-3-11-036999-1
e-ISBN (EPUB) 978-3-11-039639-3
ISSN 2195-0199

Library of Congress Cataloging-in-Publication Data
A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2016 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: Lumina Datamatics
Printing and binding: CPI books GmbH, Leck
♾ Printed on acid-free paper
Printed in Germany
www.degruyter.com
About IFLA www.ifla.org IFLA (The International Federation of Library Associations and Institutions) is the leading international body representing the interests of library and information services and their users. It is the global voice of the library and information profession. IFLA provides information specialists throughout the world with a forum for exchanging ideas and promoting international cooperation, research, and development in all fields of library activity and information service. IFLA is one of the means through which libraries, information centres, and information professionals worldwide can formulate their goals, exert their influence as a group, protect their interests, and find solutions to global problems. IFLA’s aims, objectives, and professional programme can only be fulfilled with the co-operation and active involvement of its members and affiliates. Currently, approximately 1,600 associations, institutions and individuals, from widely divergent cultural backgrounds, are working together to further the goals of the Federation and to promote librarianship on a global level. Through its formal membership, IFLA directly or indirectly represents some 500,000 library and information professionals worldwide. IFLA pursues its aims through a variety of channels, including the publication of a major journal, as well as guidelines, reports and monographs on a wide range of topics. IFLA organizes workshops and seminars around the world to enhance professional practice and increase awareness of the growing importance of libraries in the digital age. All this is done in collaboration with a number of other non-governmental organizations, funding bodies and international agencies such as UNESCO and WIPO. IFLANET, the Federation’s website, is a prime source of information about IFLA, its policies and activities: www.ifla.org. 
Library and information professionals gather annually at the IFLA World Library and Information Congress, held in August each year in cities around the world. IFLA was founded in Edinburgh, Scotland, in 1927 at an international conference of national library directors. IFLA was registered in the Netherlands in 1971. The Koninklijke Bibliotheek (Royal Library), the national library of the Netherlands, in The Hague, generously provides the facilities for our headquarters. Regional offices are located in Rio de Janeiro, Brazil; Pretoria, South Africa; and Singapore.
Foreword ― Why This Book?

There are a number of related intentions behind this book. These focus on addressing some growing problems and challenges facing the communication of research results in the U.K. as the sector moves from a print to a digital paradigm. The main issues addressed include:
– Reviewing the way research is being conducted within a developed society, in both the applied and basic R&D sectors, to see whether the current business model for sharing the fruits of research output satisfies individual requirements. The research process is changing from a singleton-dominated approach to one where ‘big science’ has implications for collaboration, interactivity and openness. These in turn have consequences for the preferred formats for publication of research outputs.
– Reaching an understanding of the changing nature of the scientific information economy – the impact of technology, politics and social change within society in general, and how this affects the way scientific research results are shared in a digital world.
– Assessing the changing balance between the formal (journal) and informal (social media) channels within which scientific, technical and medical (STM) publishing operates.
– Defining, describing and analysing the communities of knowledge workers in the U.K., with particular emphasis on unaffiliated knowledge workers (UKWs) as determined by the extent to which they have ease of access to scientific research publications.
– Giving particular attention to the challenges facing individual professionals. Researchers active in SMEs (small and medium sized enterprises) will also be analysed, as will the so-called ‘citizen scientists’. The ‘long tail’ of specialised information needs among individuals in the U.K. will be quantified where possible, and demographic trends identified.
– Assessing the impact of open access in leading to greater democratisation within the scientific information process.
– Establishing the role of learned societies as coordinators of a new paradigm for scientific information dissemination, and reviewing some of the alternative stakeholders which could take on a significant role in the dissemination of STM research output.
– Reviewing other business models being offered as alternative solutions to the impasse created by the toll-based journal subscription system currently offered by leading commercial publishers, a system facing criticism from a growing community of industry pundits.
Scenarios covering adaptation to a world of the Internet and online searching by researchers both within and outside academia will be reviewed in the summary.
Acknowledgements

An understanding of the challenges facing researchers in accessing scientific research results was sought a few years ago by means of a research project awarded to CIBER (UCL) by the Research Information Network (RIN), the Joint Information Systems Committee (Jisc), and the Publishing Research Consortium (PRC). This project, entitled ‘Access to Scientific Content: Gaps and Barriers’ (Rowlands & Nicholas, 2011), had as its main aim investigating issues closely aligned with those addressed in this book. As a member of the CIBER research team, the author was able to investigate aspects of researchers’ information needs and what additional empirical research would be required to understand their present and future requirements.

However, interest in this topic dates back much further, some seven to eight years, to when the author was a member of the senior management executive team at the British Library and became aware of the challenges which sectors of U.K. society face in gaining access to published research literature. Prior to that, the author spent ten years undertaking market research and business development at Elsevier in Amsterdam and subsequently Pergamon Press in Oxford. The author was also a director of a number of international intermediaries in the scientific communication process, both in the U.K. and U.S.A. As director of the Ingenta Institute in the early 2000s, the author investigated future developments in the information industry, an interest further advanced through being co-editor of a monthly newsletter entitled Scholarly Communications Report, published from 1998 to 2010. As such, the issues addressed in this book are viewed from the perspectives of the main stakeholders in the scientific information industry – publisher, librarian, intermediary, journalist, consultant and, most recently, academic researcher – and the views are not restricted to, or promoted by, any one group or sector.
Many experts and individuals were contacted for their views. However, none bears responsibility for the content, interpretations, conclusions, or recommendations in the following analysis. It is largely a subjective assessment of the current state of scientific publishing, overlain by as much new evidence as is available to support the key conclusion that fundamental change in the scholarly information industry is imminent and substantial, and will affect a wide spectrum of research communication activities in the near future. One particular aspect explored in this book is the challenge which researchers outside academia and corporate research institutions face in getting easy access to published scholarly research. These researchers are defined as being ‘unaffiliated’. The size of this audience and the barriers they face in becoming
active participants in the overall research process are an indictment of the current information industry and its structures. Particular gratitude is extended to all those individuals who helped identify this as a crucial issue facing science. They came from all sectors of the industry.
Contents

List of Tables XIII
List of Figures XV
List of Acronyms and Abbreviations used in the text XVII
Chapter 1 Background 1
Chapter 2 Definitions 7
Chapter 3 Aims, Objectives, and Methodology 17
Chapter 4 Setting the Scene 22
Chapter 5 Information Society 30
Chapter 6 Drivers for Change 49
Chapter 7 A Dysfunctional STM Scene? 59
Chapter 8 Comments on the Dysfunctionality of STM Publishing 80
Chapter 9 The Main Stakeholders 94
Chapter 10 Search and Discovery 107
Chapter 11 Impact of Google 111
Chapter 12 Psychological Issues 121
Chapter 13 Users of Research Outputs 132
Chapter 14 Underlying Sociological Developments 154
Chapter 15 Social Media and Social Networking 180
Chapter 16 Forms of Article Delivery 202
Chapter 17 Future Communication Trends 212
Chapter 18 Academic Knowledge Workers 238
Chapter 19 Unaffiliated Knowledge Workers 251
Chapter 20 The Professions 273
Chapter 21 Small and Medium Enterprises 280
Chapter 22 Citizen Scientists 288
Chapter 23 Learned Societies 297
Chapter 24 Business Models 315
Chapter 25 Open Access 340
Chapter 26 Political Initiatives 364
Chapter 27 Summary and Conclusions 379
Chapter 28 Research Questions Addressed 395
Bibliography 404
Index 417
List of Tables

Table 7.1 Average growth in periodical prices in the UK by subject area 61
Table 7.2 Revenues and profits from the major STM journal publishers (2013) 66
Table 7.3 Information Industry Sectors 78
Table 9.1 Significant STM publishers and their journals 97
Table 13.1 Profiles of researchers (1991/2) by Faxon Institute 135
Table 13.2 Profile of users in SuperJournal project 137
Table 13.3 PEW categorisation of information users 137
Table 13.4 Understanding Patterns of Use 140
Table 13.5 Output and percentage of research papers published by U.K. researchers, 2010 146
Table 13.6 Alternative breakdown of STM publications 146
Table 13.7 Number of articles read per annum 148
Table 13.8 Time spent reading an article 148
Table 13.9 Proportion of articles read by respondents 148
Table 15.1 The Generations – an overview 181
Table 15.2 Use of Social Media 182
Table 15.3 Demographics of Social Media use 183
Table 15.4 Demographic summary 188
Table 15.5 World Internet Usage and Population Statistics 190
Table 15.6 Top Internet using countries (2012) 191
Table 18.1 World researchers in 2002 and 2007 (estimates) 240
Table 18.2 U.K. Applications for U.K. University Attendance 244
Table 18.3 Students in U.K. universities by level of study 2005/06 to 2012/13 244
Table 18.4 Breakdown of degrees by subject area in UK universities 245
Table 18.5 Breakdown by level of academic attainment in UK universities, 2006/7 245
Table 19.1 Gross estimates of the number of knowledge workers, U.K. and U.S.A. (2006/7) 254
Table 19.2 Broad sector knowledge workers (1-digit SOC) by 1-digit SIC code 256
Table 19.3 Numbers of R&D professionals in U.K. business sectors 257
Table 19.4 U.K. Graduate employment 2002 to 2009 259
Table 19.5 Employment Activity of U.K. Graduates 2008/09, 2009/10, 2011/12 260
Table 19.6 Leavers by activity 2012/13 (all survey respondents) 260
Table 19.7 U.K. output of graduates into knowledge-based occupations, 2008/09 261
Table 19.8 Number of UK postgraduates by destination, 2008/9 262
Table 19.9 Fulltime degree leavers by subject area (2012/13) 264
Table 19.10 Destination of U.K. University leavers who obtained first degrees by subject area and activity 2011/12 264
Table 19.11 U.K. R&D in Professional and Engineering Sectors 265
Table 19.12a Integration of data sources 268
Table 19.12b Integration of data 268
Table 19.13 Key areas for academic intake into professions 269
Table 23.1 U.K. Learned Societies and their membership numbers (U.K. membership only) 298
Table 23.2 Publishing activities of ALPSP members (2013/4) 300
Table 24.1 The main publishers/suppliers of online documents 319
Table 24.2 Individual Article purchases from a publisher web site 320
Table 24.3 ‘Turnaway’ of users on one publisher web site, J-J 2010 322
List of Figures

Figure 1.1 Overview of Unaffiliated Knowledge Worker sectors 4
Figure 2.1 New Audiences for Scientific Information 13
Figure 4.1 Sociological Trends 23
Figure 4.2 Technological Trends 25
Figure 4.3 Economic/Commercial Trends 26
Figure 4.4 Political/Administrative Trends 28
Figure 5.1 From Data to Wisdom 32
Figure 5.2 The research process in outline 35
Figure 5.3 The STM Publishing Industry in Context 42
Figure 5.4 Example of ‘Twigging’ in Physics Sub-Discipline 46
Figure 6.1 Social media sites available on the Internet 52
Figure 6.2 Employment destinations of UK graduates, 2012/13 56
Figure 7.1 Total expenditure on Books and Periodicals in the UK, 2002–12 60
Figure 7.2 U.S. Academic R&D Expenditure and ARL Library Budgets, 1976–2003 in 1982 Constant Dollars 63
Figure 7.3 Library as % of total institutional spend in UK Higher Educational establishments, 1992–2005 63
Figure 7.4 Library Expenditures as a Percent of University Expenditures for 40 ARL Libraries, 1982–2008 64
Figure 7.5 Concentration within the STM journal publishing industry 67
Figure 7.5 Gaps and Barriers to STM access (2012) 69
Figure 7.6 Valley of Death 70
Figure 12.1 Trend in multi-author papers 129
Figure 12.2 Physical science versus Biomedicine with more than 100 authors per paper 130
Figure 12.3 International collaboration on science publishing 131
Figure 14.1 Three levels of interest in scientific publications 154
Figure 14.2 Theory of the ‘long tail’ 160
Figure 15.1 Impact of informal communication on traditional STM publishing market 183
Figure 15.2 Social Media sites 2012/13 184
Figure 15.3 Social Media sites by numbers of users 189
Figure 15.4 Global distribution of Internet use 190
Figure 16.1 The journal and the Scientific Method 204
Figure 17.1 The Gartner Hype Cycle 229
Figure 17.2 Nautilus model of scientific communications 235
Figure 18.1 Average annual growth rates in number of researchers, by country/economy: 1996–2007 243
Figure 18.2 Applications for fulltime courses 244
Figure 18.3 Gaps and Barriers to STM access (2012) 247
Figure 19.1 Overview of main areas of knowledge workers 254
Figure 25.1 Growth in the number of institutional repositories (IRs) worldwide 352
Figure 25.2 Growth of Mandates 353
List of Acronyms and Abbreviations used in the Text

AAP/PSP – American Association of Publishers, Professional/Scholarly Publishing Division. Part of the American Association of Publishers (AAP), based in New York.
AAAS – American Association for the Advancement of Science. AAAS is the world’s largest general scientific membership organisation, with a stated mission of “advancing science and serving society.”
AHRC – Arts and Humanities Research Council, a UK government research funding agency.
A&I – Abstract and Indexing services. Bibliographic databases of secondary information; metadata pointing to the existence of fulltext.
ALPSP – Association of Learned and Professional Society Publishers. Aims to serve, represent and strengthen the community of scholarly publishers. Has members from countries though has strong U.K. representation.
APC – Article Processing Charges. The price set for authors to have their articles published through the Gold Open Access system.
API – Application Programming Interface. A set of routines, protocols, and tools which enables software applications to be built.
ARL – Association of Research Libraries. Nonprofit organisation serving the largest research and university libraries in the U.S.A. and Canada.
arXiv – A subject-based repository of digital manuscripts, mainly covering areas in physics but with extensions into mathematics and computer science.
BIDS – Bath Information and Data Services, from the University of Bath. Formerly a provider of bibliographic services to UKHE, since morphed into Ingenta and then Publishing Technology plc.
BIS – U.K. government department responsible for Business, Innovation and Skills.
BL – The British Library.
BRIC – Brazil, Russia, India and China. Rapidly developing global economies which are increasing their role in scientific publishing.
BYOD – Bring Your Own Device. Refers to the policy of permitting employees to bring personally owned mobile devices (laptops, tablets, and smartphones) to their workplace.
CC BY/NC/ND – Creative Commons licences for use of published material. BY = give appropriate credit for published work; NC = use can be made for non-commercial purposes only; ND = derivatives of the work are not permitted.
CERN – Conseil Européen pour la Recherche Nucléaire, or European Organization for Nuclear Research. Research organisation that operates the largest particle physics laboratory in the world.
CHORUS – A suite of services and best practices that provides a sustainable solution for agencies and publishers to deliver public access to published articles reporting on funded research in the United States. Powered by CrossRef’s service.
CIBER – CIBER Research Ltd, an independent research unit, formerly the Centre for Information Behaviour and Evaluation in Research, part of City University and, later, University College London.
DIKW – Data-Information-Knowledge-Wisdom. Pyramid developed by Russell Ackoff.
CUDOS – Communication-Universalism-Disinterestedness-Organised Scepticism, a concept developed by Merton.
DOI – Digital Object Identifier. The DOI system provides a technical infrastructure for the registration and use of persistent interoperable identifiers, called DOIs, for use on digital networks.
DOAJ – Directory of Open Access Journals. DOAJ is an online directory that indexes and provides access to high-quality, open access, peer-reviewed journals. Operated by Lund University.
DRM – Digital Rights Management. The intent of DRM is to control executing, viewing, copying, printing, and altering of works or devices unless permission is given by the rights owner.
ePub – A free and open e-book standard by the International Digital Publishing Forum (IDPF).
EPS – Electronic Publishing Services Ltd, a consultancy company, now part of Outsell Inc.
ESRC – Economic and Social Research Council. National funding agency in the UK.
ETOC – Electronic Table of Contents, an online database of article listings produced by the British Library.
FASTR – Fair Access to Science and Technology Research. US federal bill to make articles reporting on publicly funded scientific research freely accessible online to anyone. (See FRPAA.)
FOIA – Freedom of Information Act in various countries.
FRPAA – Federal Research Public Access Act. Required that U.S. government agencies with large annual extramural research expenditures make manuscripts stemming from research funded by that agency publicly available via the Internet. Succeeded by the Fair Access to Science and Technology Research Act.
GNP/GDP – Gross National Product or Gross Domestic Product, alternative ways of assessing the size of a nation’s economy.
HEFCE – Higher Education Funding Council for England. Promotes and funds teaching and research in higher education institutions in England. Scotland, Wales and Northern Ireland have their own equivalents.
HEI – Higher Education Institutions.
HEP – High energy physics, a research area with its own subject-based repository – arXiv.
HESA – Higher Education Statistics Agency. Publishes data on student enrolments in the U.K., including Destinations of Leavers from Higher Education. See https://www.hesa.ac.uk/stats-dlhe
html – HyperText Markup Language. The standard markup language used to create web pages.
IFLA – International Federation of Library Associations.
IT/ICT – Information Technology or Information and Communications Technology. Research disciplines focusing in particular on the application of technology in the communications process.
ILL – Inter Library Loans. A service which enables libraries to borrow books from each other to meet occasional or regular loan requirements from requesting libraries.
IPA – International Publishers Association. IPA is a federation of national publisher associations representing book and journal publishing. It is a non-profit and non-governmental organisation.
IPR/IP – Intellectual Property Rights. Ownership over goods/services produced.
IR – Institutional Repository. A repository of research output from an institution, stored centrally and made freely available as part of the Green Open Access movement.
Jisc – Joint Information Systems Committee. A United Kingdom non-departmental public registered charity which champions the use of digital technologies in UK education and research.
JSTOR – A digital library of academic journals, books, and primary sources. Part of the Ithaka group in the USA.
LANL – Los Alamos National Laboratory. Los Alamos’s mission is to solve national security challenges through application of scientific excellence.
Listserv – Refers to a few early electronic mailing list software applications, allowing a sender to send one email to the list, which is then transparently sent on to the addresses of the subscribers to the list. One of a number of such services.
LISU – Library and Information Statistics Unit. A research and information unit based in the Centre for Information Management, part of the School of Business and Economics at Loughborough University.
M&A – Mergers and acquisitions. Growth policies pursued by companies to generate scale.
MMR – Mixed methods research; a methodology offering a systematic, structured approach to research projects.
MOOCs – Massive Open Online Courses, aimed at unlimited participation and open access via the web to educational programmes.
NESLI2 – A Jisc Collections service which negotiates licensing contracts on behalf of university libraries in the U.K.
NetGen – Term used to describe the generation which has grown up within the Internet and Web eras, and as such displays different search and retrieval behaviour from its predecessors.
njps – Non-journal publishing systems. Focuses on alternative models for disseminating research output.
NSA – National Security Agency in the U.S.A. (See PRISM.)
NSF – National Science Foundation, an independent U.S. government agency responsible for promoting science and engineering, which funds research programmes and education projects. See: www.nsf.gov/
OA – Open Access. A business model which stresses ‘free at the point of use’ for scholarly publications. Includes Green, Gold and Hybrid initiatives.
OAI-PMH – The OAI technical infrastructure, specified in the Open Archives Initiative–Protocol for Metadata Harvesting (OAI-PMH), defines a mechanism for data providers to expose their metadata.
OAPEN – Open Access Publishing in European Networks. The OAPEN Foundation is a nonprofit foundation dedicated to Open Access publishing of academic books, mainly in the humanities and social sciences.
OAIG – UK Open Access Information Group. Supports the exchange among institutions of information about open access developments.
OECD – Organisation for Economic Cooperation and Development. OECD aims to stimulate economic progress and world trade among its member countries.
Ofcom – The communications regulator in the U.K., which regulates the TV and radio sectors, fixed-line telecoms, mobiles, postal services, etc.
OINCS – ‘Out in the Cold’. Phrase used by the British Library in referring to the UKWs.
OJS – Open Journal Systems, a journal management and publishing system developed by the Public Knowledge Project (PKP).
ONS – The Office for National Statistics, the U.K.’s largest independent producer of official statistics. Includes data on U.K. knowledge workers.
openDOAR – Directory of Open Access Repositories. Run from the University of Nottingham and funded by a number of international bodies, it lists open access repositories around the world.
ORCID – Open Researcher and Contributor ID, a nonproprietary alphanumeric code to uniquely identify scientific and other academic authors.
OSTP – Office of Science and Technology Policy. Congress established OSTP with a mandate to advise the President and Executive Office on the effects of science and technology on domestic and international affairs.
PA – Publishers Association. Represents all types of publishers, with the Academic and Professional division providing a forum for higher education, scholarly and reference publishers.
PLOS – Public Library of Science, a company publishing open access journals, which arose out of concerns with the way scholarly publishing was developing.
ppv – Pay-per-view. A business model which involves paying for online access to information, often focused on accessing articles on demand.
PRC – Publishing Research Consortium. A group of associations and publishers which supports research into global issues that impact scholarly communication. Funds occasional research projects.
PRISM – A clandestine surveillance programme under which the United States National Security Agency (NSA) collects Internet communications of foreign nationals from at least nine major US Internet companies.
R&D – Research and Development. For this book’s purposes, mainly in the natural sciences, biomedicine and engineering areas.
RAE – Research Assessment Exercise, the former national system for evaluating individuals and institutions based on metrics.
RCUK – Research Councils UK, the strategic partnership of the UK’s seven Research Councils (which include AHRC, ESRC, BBSRC, etc.).
RLUK – Association serving major research libraries in the U.K.
REF – The Research Excellence Framework, the current system for assessing the quality of research in UK higher education institutions.
RIN – Research Information Network, a community interest company (CIC), formerly funded by U.K. funding agencies.
ROAR – Registry of Open Access Repositories. Hosted by the University of Southampton, ROAR is a Jisc-funded project within the e-Prints project.
RSS – Rich Site Summary. Originally RDF Site Summary; often called Really Simple Syndication. Enables publishers to syndicate frequently updated publications.
RWA – Research Works Act. US bill introduced to prevent federally-funded agencies introducing open access mandates for their research projects.
SCOAP3 – The Sponsoring Consortium for Open Access Publishing in Particle Physics. It has converted key High-Energy Physics journals to Open Access at no cost for authors. SCOAP3 centrally pays publishers.
SCURL – Scottish Confederation of University and Research Libraries. The principal association of university and research libraries in Scotland, which has long worked collaboratively and cross-sectorally.
SDI – Selective Dissemination of Information. SDI refers to tools used to keep a user informed of new resources on specified topics. It predates the world wide web and has largely been overtaken by alerts, RSS feeds, etc.
SDSS – Sloan Digital Sky Survey, an astronomy project which attracts input from amateur scientists in order to catalogue the universe.
SEO – Search Engine Optimisation. Making sure that an information provider gets its publications listed as high as possible in a searcher’s online enquiry.
SHARE – Shared Access to Research Ecosystem, an Association of Research Libraries initiative, competitive with CHORUS from the publishing sector.
SHEDL – Scottish Higher Education Digital Library. Through combined purchasing power, achieves a shared digital library in Scotland with easier access to online content for research.
SMEs – Small and Medium Enterprises. Smaller companies, many of which may benefit from easy access to scientific information in support of their innovative activities.
SPARC – The Scholarly Publishing and Academic Resources Coalition, of the Association of Research Libraries in the U.S.A. Provides information on alternative scholarly communication strategies for research libraries. See www.sparc.arl.org/
STFC – The Science and Technology Facilities Council, a UK government body that carries out civil research in science and engineering, and funds UK research in areas including particle physics, nuclear physics, space science and astronomy.
STM – Scientific, Technical, Engineering and Medical research areas. Also abbreviated as STEM.
STM – International Association of STM Publishers, with offices in Oxford and The Hague. Its members each year collectively publish a large share of all journal articles.
ToCs – Tables of Contents.
UCAS – Universities and Colleges Admissions Service. Central charitable organisation through which applications are processed for entry to higher education. Includes information and services for prospective students.
UCL – University College London.
U.K. – United Kingdom.
UKCMRI – The UK Centre for Medical Research and Innovation, since re-named The Francis Crick Institute.
UKOFT – UK government agency monitoring ‘fair trading’, ensuring that corporate abuse did not occur and that consumer interests were protected. It has closed, with its responsibilities passing to agencies such as the Competition and Markets Authority (CMA) and the Financial Conduct Authority.
UKOLN – UKOLN is no longer UK core-funded but continues in a more limited role in research data management and public engagement activities for a number of agencies, at the University of Bath.
UKWs – Unaffiliated Knowledge Workers. Those knowledge workers who would benefit from easy access to scientific information.
UNIGE – University of Geneva.
UNITAR – An autonomous body within the United Nations with a mandate to enhance the effectiveness of the UN through training and research.
U.S.A. – United States of America.
VISTA – The next emerging markets after BRIC: Vietnam, Indonesia, South Africa, Turkey and Argentina.
VoR – Version of Record. The final published paper available through the formal publication system.
WHEEL – Wales Higher Education Electronic Library, a national licensing scheme for higher education.
XML – Extensible Markup Language, a markup language defining a set of rules for encoding documents in a format which is both human-readable and machine-readable.
Chapter 1 Background

This book reports on the current state of scientific publishing. It is a study about the reinvention of corporate roles in the global scholarly communication industry, set against a background of migration from a print to a digital world. It is a description of a transformative process still underway. It involves a clash between a strong legacy supporting a traditional print-based approach and innovative new procedures which adopt emerging digital work practices and powerful communications and networking technologies. Specifically, it reviews claims of dysfunctionality in the present publishing system and its inability to cater for researchers’ future information needs. The former offers the researcher values such as authority, quality and safety; the latter greater speed, more transparency, lower costs and greater reach into new areas of usage.

This is a significant issue. The effectiveness of communicating the results of scientific effort is at stake. Research is at the heart of a growing economy, and inefficiency or dysfunctionality in the dissemination of its output comes at a huge social and economic cost.

This book also reflects on the trend whereby new forms of scientific communication are being led by scientists who combine knowledge of their scientific discipline with an appreciation of the specific needs of their peer group. New services are emerging at grass-roots level. On the other hand, it has been suggested that existing publishers and librarians provide steady improvements and local enhancements to the legacy research system; they are not organised or empowered to adopt risky and innovative processes. Nevertheless, what is developing, slowly, is the potential for a more democratic information system, based on rapidly developing communication and computer technologies, which supports interactivity, collaboration, openness, and sharing.
It could enable a much wider audience to take part in the broadest aspects of science research and communication. This book begins with an analysis of how publishers and librarians currently face the changing business environment, and how they need to adapt in order to remain significant players on the information scene in the next two to five years. It remains to be seen whether they will become leaders in the drive towards an effective scholarly communication system or remain merely bit-part players in sustaining successful new ventures. Whether they will be sidelined completely, as new ventures supported directly by end users emerge to create competitive services that meet needs in a more timely way, is also considered. There is a ‘valley of death’ syndrome which existing stakeholders need to cross, from the downward
slope of print to the upward climb of the digital. The valley floor itself involves cultural change and the adoption of new business practices which publishers may find difficult to take on board (see chapter 7). Scientific communication is manifesting change in a number of ways: not only in the extent to which electronic publishing and network technology are being adopted; and not only in coping with the administrative changes which have come into play as focus is applied to ensuring research efficiencies are introduced; but, equally significantly, in how the current players react to change in the sociology of science and market behaviour. There are many unanswered questions about the interaction of all these issues and how the answers will impact on the future model for scientific communication. There are many unknowns and few knowns about how the print versus digital scenario will play out. This study adds a further facet to the above complexity. There are opportunities for private individuals to benefit from access to the findings of scientific research, but because of current technical, social, administrative and business constraints they are denied ease of access. The study explores whether externally-sourced changes will enfranchise these individuals, or keep them, as now, on the outside of the scientific publishing process looking in. The ultimate aim would be to enfranchise the disenfranchised, affiliate the unaffiliated, and make science more democratic and less elitist. Interest in science output is no longer confined to the 7–12 million researchers worldwide in academia and corporate R&D. Instead, an estimated 600–800 million knowledge workers are potentially interested and could become enfranchised, albeit at lower levels of research intensity (see chapter 19). Who are these knowledge workers? 
There is no precise or accepted definition of them as a group or community, largely because of the diversity of their backgrounds and their differing levels of academic achievement. Studies have cast the net wide to include any job or occupation which has a ‘cerebral’ content – approximately 50% of the labour force is included in such classifications (Drucker, 1973; Porat, 1977). For the purpose of this report, a narrower definition is applied which places emphasis on access to ‘high-level scientific’ research information which would be relevant to individuals in their professional or personal lives. The term ‘unaffiliated knowledge workers’ (or UKWs) will be used to describe members of these groups who find themselves outside the mainstream scientific publication system. ‘Scientific’ in this context refers to findings which have traditionally been reported in research articles in the hard sciences (beta sciences) published in reputable peer-reviewed journals for a global research audience. Present and future research outputs can take a much broader range of formats, from datasets
to video clips, from preprints to e-prints, from moderated bulletin board items to blogs and wikis, from mash-ups to webinars. These could be the avenues along which UKWs march to gain future access. The term ‘unaffiliated knowledge workers’ excludes those individuals who are part of academia, large corporate research centres or research institutes – ‘affiliated’ in the sense that they do not face the same constraints in accessing scientific literature as those outside such institutions. These large ‘affiliated’ research institutions are likely to have their own research libraries and professional information support staff, tasked with providing access to relevant research literature for their affiliated patrons by purchasing the items (mainly journals, serials or books, printed or electronic) out of an annual collections budget. Those individuals who are outside such an institutional purchasing mechanism are in effect disenfranchised from the mainstream of published research output and are categorised as UKWs. The situation facing UKWs is an example of the ‘dysfunctionality’ charge levelled at scientific journal publishers in particular. The preferred business model supported by most journal publishers does not treat individuals outside the research library sector as a significant and commercially viable target audience. Hence their needs are not taken into account, and ever since the decline of the personal (journal) subscription (Tenopir & King, 2000) they have been excluded from easy access to relevant scientific output published in scientific journals. The scientific journal reigns supreme and is heavily protected, despite a growing litany of complaints about its present and future role, and criticism of the defensive postures adopted by some leading publishers in protecting the commercial interests of their stockholders. 
‘Communication’ comes in many forms, of which journal publishing is one small part. Researchers are enveloped by new informatics services which increase the speed and relevance of information-seeking and reporting activities. Their reliance on traditional forms of publication declines as they become sophisticated ‘digital consumers’. ‘Big Science’ has become a growing part of the scientific research effort, and this has spawned new ways of cooperating and sharing. Collaboration and communication, global in extent, are being conducted through services such as Skype, Viber, LinkedIn, ResearchGate, Facebook, blogs, webinars and listservs rather than through the pages of the scientific journal. Many researchers are lost and confused by such changes, particularly with respect to what they can and cannot do as openness clashes with ownership of intellectual property in the dissemination of formalised research output (Cox & Cox, 2008; Morris, 2009). The position which learned societies have in providing their professional members with information support services will also be addressed. Though this does not cover all UKWs, the link between many learned societies, professionals, and
the latest applied research relevant to their areas is strong. Learned societies represent a core market sector of UKWs. Their specific information needs will be highlighted, and their differences from affiliated academic users will be described. The basic argument put forward in this book is that rapid change is underway, and that the past and current system for scientific information dissemination may not be appropriate for a future in which bigger, new, mass audiences could be reached using different procedures for access to research outputs. The diversity of such new audiences is illustrated in Fig. 1.1. Not only are there many professions with their associated learned societies, there is also a large category of SMEs (small and medium-sized enterprises) which are unable to get regular access to the range of scientific output that might help such entrepreneurs and innovators in setting their business strategies and operations. Other disenfranchised areas will be commented on as part of the ‘long tail’ of U.K. knowledge workers, including the growing number of ‘citizen scientists’ who participate in many global research challenges (see chapter 22).
Fig. 1.1: Overview of Unaffiliated Knowledge Worker sectors. The sectors shown include: alumni; city and financial professionals; small and medium-sized enterprises; charities; voluntary workers; agriculture; entrepreneurs and innovators; researchers in academia; researchers in industry; patients and healthcare workers; policy makers and research funders; citizen scientists and interested laypersons; distance learners; developing economies; government officials; those retraining; and the general public.
However, there is a constraint facing the objective set for this book. Though scientific research and the publishing of research outputs are important to society, there is a dearth of reliable, consistent and comprehensive data on which to build effective national or international information strategies and policies. Pockets of statistics
do exist in the U.K., from agencies such as the Office for National Statistics (ONS), the Higher Education Statistics Agency (HESA), the UK Department for Business, Innovation and Skills and LISU, and, in a broader international setting, from agencies such as Eurostat and the National Science Foundation. But these are not joined up, rarely share common definitions, and merely accentuate the poverty of quality macro-level data available within this industry. This has led to controversy about the value contributed by various players in the overall STM publication process. But salvation may be in sight. This confluence of unmet information needs within a changing research environment has come at a time when ‘open access’ has emerged as a potentially viable business model. Open Access (OA) provides the means whereby those who have been excluded, the UKWs, could now be included in the formal research communication process, as no paid subscription is required for access. A leading barrier to access – financial controls – is removed. An international infrastructure has been built around the alternative OA routes of Green, Gold and Hybrid (see chapter 25). Despite the attractions of OA, there is still some resistance to its universal adoption, currently being debated, often with high emotion, among and between the information industry’s stakeholders. New, untapped markets for research outputs have so far not been effectively targeted by OA, as its supporters seek to use the new business model to assist the ‘affiliated’ rather than reach out to unknown markets. Nor has OA been universally accepted by the researcher community itself, whose members – as prospective authors – still display a conservative approach to adopting this new business model. This book goes one step beyond OA by exploring the scale and the differing current information needs and habits of those who constitute the ‘long tail’ of scientific information. 
The key element of this study is its argument that a new approach is needed – one which does not just extend traditional practices. How researchers in all types of organisations are adapting their research activities in response to the ‘perfect storm’ (chapter 4) facing them in the workplace becomes important. Support for new market scenarios needs to be built on solid evidence and hard fact wherever possible, even though such evidence is currently fragmented, spasmodic and inconsistent. Nevertheless, for the future, reliable data on demographics and sociological trends are essential to understanding the changing information habits of the total research industry. One specific aim of this book is to provide tangible evidence on the nature of the UKW communities in the U.K., sufficient to enable new and viable business models to be developed which would enfranchise such non-academia-based
professional knowledge workers. But more data and evidence need to be created. As William Thomson, Lord Kelvin, wrote in 1883:

I often say that when you can measure what you are speaking about and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.
This study attempts to fill some of the gaps in knowledge by measuring and pulling together evidence from a variety of qualitative and quantitative sources. First, however, a number of terms used in the study require definition, and the approach taken needs to be outlined. This is the topic of the next chapter.
Chapter 2 Definitions

Approach

Several issues are involved in this analysis.
– The book focuses on science, technology and medicine (STEM, S&E or STM) rather than broader scholarship. These acronyms will be used interchangeably, with ‘STM’ the most used. It is, however, recognised that disciplines take highly fragmented approaches to digital information. A physicist is a different social animal, in information terms, from a humanist; a biologist from an econometrician. Even within scientific disciplines there are sub-cultures, each with a differing approach to adopting scientific publishing practices.
– In terms of academic discipline, the study is about information science. However, it also straddles informatics and the sociology of science, together with behavioural economics. Psychology, social psychology, and social networking all have their parts to play in coming to terms with the strength of emerging informal social media and other alternatives to online text as platforms for scientific communication.
– The study is commercially focused and strategic rather than purely academic in its approach. An assessment of market size, trends, and prevailing business models is essential to understanding the interaction and significance of the features involved in the change taking place in the publication of results from scientific research. Over time it is hoped, as suggested above, that gaps in data collection can be filled, and that future iterations of this project will enable effective market data to be produced for analysis. Such assessment is also important in judging the viability of new ways of communicating specialised information in a digital world – in determining viable approaches to reducing barriers to information access. Future infrastructural investments will be dictated by how confident the key players are that there is a socially acceptable and sustainable business underlying the dissemination of research results. 
– This is primarily a U.K.-focused assessment, even though the issues are global. Truly comparative data is hard to come by, not only for the U.K. but also for other information-rich countries.
– It is an independent, impartial study, based on the experiences of the author, who has been part of organisations involved in all stages of the
research cycle – from authorship (books and research articles), publishing (Elsevier), librarianship (the British Library) and intermediaries (Faxon; Blackwells) to consultancy (DJB Associates) and honorary academic postgraduate research (at University College London). Relying on any one of the existing functions or stakeholders to make such assessments would risk vested interests and legacies distorting the picture, however well intentioned. Impartiality is important at this juncture, when concern is running high about the activities of some players in the STM publishing business. The intended outcome is to provide confidence sufficient to enable scholarship and research communication to move forward to a sustainable and broader platform which meets the differing demands of the stakeholders involved. An essential component of this book, therefore, will be speculation on how these various stakeholders show intent in adapting to the challenges, and thereby avoid disintermediation and oblivion. Just as important is to provide succour to a new set of players (UKWs) who could become active participants in the scientific information business of the future.
Definitions

Science Research

As indicated above, the book focuses on scientific information, however defined. It carries the labels STM (the acronym used throughout this book), STEM, sci/tech/med and/or S/E (as used predominantly in the United States). They all make the distinction between the natural sciences – physical sciences, technology, engineering and biomedicine – and the ‘softer’ scholarship of the social sciences and humanities. Only the natural or physical sciences – STM – will be investigated in this study. No distinction is made between applied and basic research: both are covered in this book as long as they relate to the so-called ‘harder’ or STM disciplines. Nevertheless, STM cannot be treated as a homogeneous entity. Each subject within the broad disciplines of STM has its own characteristics, drivers and culture, and these define how it approaches the challenges of adopting information as a support asset.
New Audiences

Using the term ‘disenfranchised’ to highlight new audiences, however appropriate, is considered pejorative in some circles. ‘Unaffiliated’ is a less emotive term. Both convey that there are audiences ‘outside the academic garden walls’ – not beneficiaries of the closed academic/scientific information system whose funds to purchase required publications are mainly channelled through the research library. It is assumed that enabling a much wider audience to access published material – with the same access rights (administrative, technical and commercial) as those within academia – would improve the overall national knowledge base and make UKWs more productive and fulfilled. Society in general would be the beneficiary. Several areas will be highlighted as comprising ‘unaffiliated knowledge workers’. These are:
– Professions, notably those with formal entrance standards and requirements to undertake ongoing technical evaluations, re-assessments, re-certification and updates. They are often supported by learned societies which provide administrative and information services for their members
– SMEs, small and medium-sized enterprises, particularly those companies where a strong research or innovative component is needed for them to survive, become viable and remain competitive in the private sector
– Citizen scientists – the many individuals who have passed through a higher education system and wish, for a variety of reasons, to be kept informed of developments in their former discipline. This group also includes those who have found a new interest in collaborative science in areas such as astronomy, the environment, biosciences and biodiversity. 
– Patients who wish to keep informed and updated on new procedures affecting medical conditions from which they or their contacts/relatives are suffering
– Administrators, policy makers, consultants, advisors, charities, and other intermediaries who keep a watching brief on national scientific information policies
– Remote and distance learners, often based in inaccessible locations
– Researchers in third world countries – though technically outside the scope of this (U.K.-focused) report, the attention being given to developing business models appropriate for poorer countries could have relevance for UKWs generally.
See Fig. 1.1 for more areas of unaffiliated knowledge workers. Not all these UKW areas will be investigated in depth in this book, but they demonstrate how broad the potential market for access to research results could become. The key ones in this instance are the professions, SMEs, and citizen or amateur scientists.
Professions

Professions are defined (according to Wilson, 1989) as organisations which include a ‘formal education’ requirement. In particular, a professional is someone who receives important occupational rewards from a reference group whose membership is limited to people who have undergone similar specialised formal education/training and accept a group-defined code of proper conduct and practice. The key elements of a profession are:
– It has developed rigorous standards
– There are conditions of entry
– The profession offers training and support
– There are specific and unique rules of conduct
– It is self- or statutorily regulated
– The profession has accountability
– There is a knowledge base which often has its roots in formal higher education
– It has a distinctive and identifiable social mission.
Three key features relate professionals to the scientific publishing sector. First, a professional learns things in a way which differentiates them from most of the general population. Secondly, professionals pay as much if not more attention to the judgement of their peers as to the judgement of their customers when deciding how to perform their tasks (Shirky, 2008). Finally, professions having their technical roots in advanced science subjects and higher education make their members comparable to academic researchers, and as such they need similar ease of access to relevant research results.
Small and Medium-sized Enterprises

Defined as private or public organisations with fewer than 250 employees, SMEs constitute a large section of the industrial base of a developed or developing economy. They are often the source of the innovation which helps change the direction that society and the economy take. They are frequently the pioneers, the innovators, the entrepreneurs. Many base their operations on the latest STM developments. Their inspiration often depends on research which takes place in academia, research institutes or large corporations, and they have assimilated the scientific ethic or culture of their discipline. In this respect the needs of SMEs are similar to those of members of professions.
As with professional members, access to the latest scientific and technical developments, as reported in the specialised media, is denied them for financial reasons. They are also constrained from openly revealing the results of their own research for competitive reasons: SMEs have a commercially-focused, proprietary need for the benefit of their employer. Though 250 employees is used as the upper limit, a more realistic breakpoint captures the small, energetic and entrepreneurial companies with fewer than 25 employees. Companies with close to 250 employees are more likely to have the resources to conduct research within their organisation with the full panoply of support services – such as in-house research libraries with collections, information support staff and services – making them similar to larger R&D corporations and academic institutions in their research profile. For the purposes of this book, a 25-employee limit will be adopted as the pragmatic definition of an SME.
Citizen or Amateur Scientists

As society becomes more educated, and as a greater portion of the population benefits from higher education, the scope increases for individuals to pursue interests which may or may not be directly relevant to their chosen career path. Informal groups form to share these interests and make use of the latest communication technologies. Social networking leads to social collaboration. The drive is to explore new frontiers, to expand the mind, to feel comfortable with the environment within which they exist – ‘to belong’. There are many examples of such social collaborations, from studying distant universes in astronomy to the elimination of poverty and starvation, from protecting the environment to monitoring weather patterns, from studying biodiversity to understanding how to control diseases. Many of these require ease of access to research outputs in order to push back the frontiers of knowledge. According to Weller (Weller, 2011), “… in a digital, networked, open world people become less defined by the institution to which they belong and more by the network and online identity they establish. The well-respected digital scholar may well be someone who has no institutional affiliation. The democratisation of the online space opens up scholarship to a wider group, just as it opens up subjects people can study beyond the curriculum defined by universities”.
Citizen scientists can become active lobbyists or passive hoarders of information. The spectrum of their requirements from an effective science information system is wide. The difficulty, again, is that the structure of the existing scientific communication process raises access barriers to individuals in these groups.
In the print-based system, exclusion was less severe, as such individuals could often make use of a library that subscribed to the relevant journals. Electronic licences restrict use to persons affiliated to the subscribing institution, rendering individual researchers unable to avail themselves of the latest developments in their areas of interest. Yet they represent growing and in some instances powerful social groups. They offer awareness and understanding of scientific and technical issues relating to their specific areas of concern or interest, and ride on the back of the popularity of social networking for their inter-community interaction and communication.
Knowledge Workers

The definition given for knowledge workers in Wikipedia is: “Knowledge workers in today’s workforce are individuals who are valued for their ability to act and communicate with knowledge within a specific subject area. They will often advance the overall understanding of that subject through focused analysis, design and/or development. They use research skills to define problems and to identify alternatives. Fuelled by their expertise and insight, they work to solve those problems, in an effort to influence company decisions, priorities and strategies. Knowledge workers may be found across a variety of information technology roles, but also among professionals such as teachers, librarians, lawyers, architects, physicians, nurses, engineers and scientists. As businesses increase their dependence on information technology, the number of fields in which knowledge workers must operate has expanded dramatically.”
The Wikipedia definition therefore includes both scholars in academia and professionals in wider society. The ‘disintermediated’ or UKWs are a subset of a large global knowledge worker community. They rely on the latest STM research output to remain relevant. We are witnessing the age of an extensive network of enlightened knowledge workers with a broad requirement for access to scholarship. In the United States, for example, the number of scientists and engineers in the workforce has been estimated at anywhere from 5 million to 19 million (National Science Board, 2014), depending on how broadly the knowledge worker sector is defined. It is a diverse and diffuse sector.
Learned Societies

Learned societies act as a paradigm for a community of like-minded individuals having a common mission which, in part, requires support for access to high-level
information. Learned societies have organisational structures, an informational culture, and social responsibilities. The learned society – a ‘community’ or association – is in keeping with the way people communicate through the Internet. As will be discussed later, many professional societies rely on a commercial approach to scientific publishing so that they can maintain the rest of their not-for-profit activities through the revenues generated by their society journal. An acceptable balance between providing relevant, targeted services and maintaining commercial sustainability is difficult to achieve. There is an issue of scale: the society needs a large enough publishing programme to ensure its viability. Too small, and it remains a cottage industry; too large, and it undermines the other important social missions performed by the society. However, there are many learned societies in existence – as ‘twigging’ encourages the emergence of new sub-disciplines, so the practitioners in such new areas join together to create their own journals, culture, procedures and learned society. The CBD Directory of British Associations & Associations in Ireland (CBD Research Ltd, 2009) includes over 10,000 such institutions, though most do not have a strong scientific focus. Fig. 2.1 illustrates how the main audiences for STM information relate to one another.
Fig. 2.1: New Audiences for Scientific Information. The audiences shown include publishers, learned societies, research libraries, academia with its ‘affiliated’ knowledge workers, and the unaffiliated knowledge workers – the professions, SMEs, citizen scientists, and the general public.
Product

This book primarily looks at scientific research published as articles in journals. There are other formats within the scientific communication process – notably conference proceedings, data and datasets, mash-ups involving the integration of a number of media elements, supplementary material, grey literature and unrefereed reports, e-theses, patents, audio-visual presentations, conferences/meetings (actual and virtual), laboratory notes, etc. These will also be included under the STM umbrella. In addition, social media are spawning new carriers of information. These range from blogs to bulletin boards and listservs, from groups which create their own specialised online forums to other informal online services in which information and ideas are shared about a topic of common interest. Twitter, Facebook, Figshare, LinkedIn, Mendeley, and ResearchGate are exemplars of the new product formats, platforms and services that the migration to digital information services has enabled. Teleconferencing and webinars are also increasingly taking centre stage in the communication of science. All these product formats and carriers will be referred to as alternative, ‘informal’ scientific media, and in many instances they are more appropriate for meeting the future information needs of researchers. This is where the clash happens between traditional academic approaches and the new Internet advances, with the battleground being business models and ease of accessibility.
Research Journals

Since the first scientific journals were launched, the dissemination of credible scientific research output has centred on the journal. In England it started with Henry Oldenburg, who became the first secretary of the Royal Society, established in London in 1660, and the founding editor of the Philosophical Transactions of the Royal Society (1665). He maintained an extensive network of scientific contacts throughout Europe, and initially pleaded with known scientists to send in their manuscripts for consideration in the Society’s new journal (Hall, 2002: 159). Submitted manuscripts were then sent to experts in the field who would judge their quality before publication was agreed. This refereeing process sifted for quality. It was the beginning of both the modern scientific journal and the practice of peer review. Over a thousand journal titles were founded in the 18th century, and the number has since increased rapidly. In mid-2012 there were an estimated 28,100
active, scholarly peer-reviewed journals published by between 5,000 and 10,000 journal publishers (Ware, 2012; Ware & Mabe, 2012). During the past three and a half centuries the basic functions of these journals have changed little. These functions include:
– Initial registration of the research results, claiming precedence
– Certification that they are correct, as arbitrated through review by peers prior to publication
– Archiving of the results in a structured and traceable manner
– Dissemination of the publication to all those entitled to receive it.
Navigation has more recently been added to these functions, as has online availability. Primary research articles are highly technical, representing the latest theoretical research and experimental results in the field of science covered by the journal. They are often claimed to be incomprehensible to anyone except researchers in the field and postgraduates, though the extent of this is challenged as we enter an era in which the number of educated knowledge workers (nurtured within the higher education system) grows rapidly. Tertiary reviews, written by specialists in the field, fill the vacuum between the high-level researcher and the mass of knowledge workers. Reviews are a transformative device, translating research results into a style which can be understood by a less specialised audience. However, in terms of sheer output, the primary research journal with its many research articles dwarfs the tertiary review literature. Transmitting the latest specialised research output into the mainstream public information system is not a key feature of the current research journal. It is in many respects elitist, catering for the needs of an exclusive and privileged sector of society operating at the frontiers of knowledge.
Access
Access to research output ranges from skimming a list of article titles, through browsing content, to reading snippets or the full article and, in some instances, interacting with the author through online services. All forms of access have their place in this study. There are also several barriers put in the way of researchers trying to gain access, a leading one being commercial (journal pricing). The subscription model for journal access, and its derivative 'the Big Deal', has become the basic means of enabling researchers to access the world's research findings. This book explores other business models which are currently in place controlling
access to STM articles, as well as assessing emerging business models, particularly those which may help bring UKWs more closely into the scientific research effort. It assesses the entitlements these business models give to the user: from reading the article locally, through credit-card payments for a pay-per-view option, to the latest iTunes equivalents whereby micropayment mechanisms allow minimal access to articles or parts of an article online. In addition, the issue of 'openness' pervades an increasingly Internet-focused society, gradually impacting also on the scientific communication process. The tension between the Internet convention of perceived free access in the non-affiliated sector (professions, SMEs) and the traditional convention of sifting and quality control (at a cost) in the scientific literature is an area of growing uncertainty for business modelling.
Chapter 3 Aims, Objectives, and Methodology
Aims
This book tests the assumption held by many pundits (Rowlands & Nicholas, 2011) that the current scientific/technical publishing system creates obstacles and barriers preventing the universe of researchers in the U.K. from having equal access to published research. It investigates whether a wider sector of society could benefit from easier access. In so doing it explores how viable the current STM publishing system is and whether it is fit for purpose in a digital and Internet world. It assesses claims made by many pundits (see chapter 8) that STM is dysfunctional.
Objectives
The key objectives of this book include:
– Identifying and evaluating contextual issues, such as administrative, governmental, technical, commercial and social trends, which create access barriers
– Exploring what influences usage of the research literature
– Identifying the types of users who face inequitable barriers in accessing scientific publications
– Establishing what would be necessary to bring unaffiliated knowledge workers back into the system as equal participants
– Providing evidence to enable those agencies which produce and disseminate scientific communication to adjust their business practices to satisfy the needs of a greater number of people, and thereby benefit from a more robust and healthy business environment
Methodology The following analysis explains which methods were adopted, why and how.
Research Paradigm
This book's overall methodological approach adopts a post-positivist research paradigm (Pickard, 2013). Post-positivism recognises that the detection of social reality is subject to the frailty of human nature (as distinct from positivism, which is bound more to natural laws, as in the natural sciences). There is greater informality in the post-positivist approach, in keeping with the social focus of this book, without straying into the realms of conjecture and supposition.
Research Methodology
Mixed methods research (MMR) was used, combining qualitative and quantitative sources as circumstances warranted. This involved an iterative build-up of knowledge about the issues addressed in this book. Neither the qualitative nor the quantitative approach on its own would provide the evidence necessary to assess the challenges facing STM.
Research Methods Historical research of a wide range of document formats (desk research) was used to build up the evidence base of ideas, views, opinions and data. It relied on data which already exists rather than the creation of new raw data. Because the issues being dealt with are not rich in quantitative evidence – though the available data is included where possible – “exploration has become the main focus of this investigation, not testing or measuring” (Pickard, 2013).
Research Techniques
An extensive literature study of formal refereed publications, informal social media, and the trade press was undertaken. This included online bibliographic database searches to identify the problems facing the research community in gaining access to published research, and to validate whether there is dysfunctionality within the sector. Monographs, commentaries and reference works were also analysed for their relevance. These works were from eminent authors who offer relevant
views, and include works by Nielsen, Monbiot, Gowers, Allington, Murray-Rust, Weinberger, Tapscott and Shirky, among others (see chapter 8). Relevant hard evidence or quantitative data was extracted from a number of statistical collections, including those from the Higher Education Statistics Agency (HESA), the Office for National Statistics (ONS), the Department for Business, Innovation and Skills (UKDBIS), the US National Science Board (NSB) and the European Commission's statistical services. Critical analysis has been undertaken to weave arguments into a conceptual framework supporting the contention that there is a problem facing the dissemination of scientific research, particularly among those knowledge workers not affiliated with large research institutions.
Research Instruments
Semi-structured iterative interviews were undertaken with some 20–30 experts representing different aspects of the research life cycle, during which concepts and ideas were tested. These included gurus or mavens in areas such as consultancy, publishing, librarianship and academia, each of whom had something unique to contribute. Interviews were also held with representatives from U.K. learned societies. Responses were also derived from questionnaires returned in an RIN-funded study into researchers' information activities. These primarily investigated the 'gaps and barriers' facing scholars mainly in academia, but also the needs of those outside academia (Rowlands, 2011). A set of questions, based on the above approach, was used to provide direction for this study.
Questions
The aims of this investigation can be restated as twofold. The first is to provide a picture of the challenges facing unaffiliated knowledge workers (UKWs) in the U.K. and their apparent inability to gain easy access to published scientific information. The second, a necessary step back, is to look at the overall trends facing the information industry and in particular to assess whether it is unfit for purpose – whether there is a privileged, elitist group within society which benefits from the current system at the expense of greater democracy in the dissemination of research results among a wider audience.
The heart of this project is to come up with recommendations for those agencies which produce and disseminate scientific information in the digital age to adjust their business practices to satisfy the needs of a greater number of people, and thereby benefit from a more robust and healthy business environment. This dual aim has led to answers being sought to the following questions.
Industry Structure:
1. What are the overall macro-level trends impacting on scientific communication?
2. What is the current structure of the information industry in the U.K., specifically the research sector requiring access to scientific information?
3. What are the main external drivers for change?
Industry Concerns:
4. How robust is the current scientific publishing industry in the U.K.? Will it adapt to address the information needs of a latent knowledge worker sector?
5. What are the opinions of leading industry observers concerning the main sci/tech/med publishing stakeholders?
6. What are the main information usage patterns found among researchers?
Social and Technical Trends:
7. How significant are underlying sociological trends in changing research activity?
8. How will researchers interact with social media in future in getting access to required scientific research results?
9. What media other than research journals (such as blogs, datasets, crowdsourcing, etc.) are used to keep up-to-date?
UKWs:
10. Who are those not benefiting from the current system of scientific publishing? What are the main sectors within unaffiliated knowledge workers?
11. What problems does each of these knowledge worker sectors have in getting access to formally published research results?
12. What needs to be done to enfranchise UKWs in the U.K. in future?
Supporting Agencies:
13. What role will learned societies have in supporting access?
14. How will open access facilitate greater democratisation within scientific information?
15. What is the impact on stakeholders of providing for UKW researchers' information needs?
The following chapters look at each of the above in turn. A set of conclusions and recommendations will be provided at the end of the book.
Chapter 4 Setting the Scene
Introduction
Winds of change are blowing throughout scholarly communications, culminating in a so-called 'perfect storm'. This arises from the confluence of several largely unrelated trends, which can be categorised as sociological, technical, political/administrative, economic/commercial, openness-related, and overall trends impacting on the research process.
Sociological Trends
Higher Education: The proportion and number of educated 'knowledge workers' within society are growing as governments seek to increase attendance rates at higher education institutions (HEIs). This leads to a more informed and 'scientifically aware' society. The majority of those benefitting from higher education are destined for careers outside academia, which becomes a stimulus for the wider dissemination and understanding of scientific research results beyond the traditional core universities.
Generational differences: There is the rise of the so-called 'Net Geners' (born since the arrival of the Internet). They lack the constraints and habits set by a print-on-paper paradigm. They are 'digital consumers', experienced in using IT services and able to multitask. Their adoption of digital information sources challenges the traditional reliance on the printed book and journal as the primary means of gaining access to scientific information.
The Shallows: As described by Carr (2008; 2010), the impact of demographic changes taking place in society, together with new technical options for communication amongst researchers, is leading to changes in the way people absorb information. Using neurological studies to show that the brain is adaptive and can be stimulated by new situations, Carr and others suggest that the reading of full research articles is losing out to the skimming of online summaries and abstracts in other media. This impacts negatively on branded journals.
Social networking: Social media supporting social networking are also part of the community's infrastructure, as researchers adopt procedures and technologies well established within the consumer sector and used by them in their non-research activity. These range from Skype through to moderated bulletin boards, and come in various other rapidly evolving, socially determined forms.
Collaboration: Much research involves teams of specialists acting in close cooperation, and not only within universities – there are a growing number of public and private research initiatives across many institutional types, both globally and within the U.K. The era of the single researcher breaking new research ground is fading, moving towards 'Big Science' (Price, 1963) and 'collaboratories', their coordination guided by an 'invisible hand' towards joint discovery and innovation. This has generated networks of collaborators worldwide, with a resulting demand for effective, real-time and fast communication support services.
Workflow processes: Publication of research results is a defining stage in the research process, but prior to that there are a number of phases which a project moves through, each having need for access to certain types of information. From initially researching the idea, through finding out about competitive studies and teams, to seeking collaborators, sharing research data and, in some cases, investigating commercial returns – each phase places a demand for access to different types of data and publications. The variables which impact on these sociological trends – mainly demographic change, research trends, neurological developments and social media – are illustrated in the following diagram:
Fig. 4.1: Sociological Trends (a diagram placing researcher behaviour at the centre of four clusters: research trends – workflow processes, 'Big Science', datasets and logs; demography – educational attainment, generational differences, university challenges; social interaction – social media, collaboration and interactivity, communication v certification, information excess; and neurological changes – 'Google makes stupid', mapping the brain)
Datasets: Resulting from the changing nature of the research process, there is a growing demand for access to raw data so that experiments can be replicated in different environments and for different purposes (Economist, 2013). The challenge of identifying relevant data sources and getting access to them can be considerable, but meeting it is necessary to avoid duplication of research effort. Services such as DataCite are addressing this, but the sheer diversity and size of some big science projects makes coordination in this area a significant global problem. It is changing the balance away from text-only articles and supporting the rise of varied multimedia sources of content for research purposes.
Technological trends
Technical Platforms: These include a variety of initiatives, including ongoing refinements to powerful global search engines and developments in electronic publishing generally. Gateway services such as Google Scholar, PubMed, Scirus, and Web of Science have been particularly important in driving awareness of relevant publications. Online bibliographic databases provide additional secondary sources. Such platforms and services enable technical links to remote information sources to be achieved more quickly and easily.
Greater availability of devices: Hardware such as laptops, netbooks, eBook readers, smartphones, tablets, etc. has widened the potential audience for digital information. A significant technical infrastructure is in place, able and ready to access available research output in whatever format. As the costs of these devices continue to tumble, new audiences are able to benefit from their use for scientific information support.
Electronic publishing: Changes in printing and digital publication are affecting the technological profile of the publishing sector.
Communication changes: There is an emphasis on speed of information transfer, with the traditional journal being seen as dated and new, interactive communication channels being adopted. Skype, LinkedIn, moderated bulletin boards, webinars and blogs figure as services in this area. These speedier and more interactive communication services – compared with the postal service – are available at low cost and are changing the ways individuals share ideas and research data.
Tradition versus Originality: Ensuring quality/correctness and maintaining scientific values and ethics both enforce conformity and orthodoxy; however, the value attached to originality encourages dissent. As such, the tensions facing
researchers in balancing these drives become more intense as improvements in technology tilt the balance in favour of innovation and originality. The integration of the various technological enhancements can be seen in the following diagram:
Fig. 4.2: Technological Trends (a diagram placing researcher behaviour at the centre of four clusters: access devices – PCs and laptops, mobile/smart phones; the Internet – listservs, Skype and webinars, datasets; web services – grass roots, publisher, or library/other driven; and search services – Google, discipline-focused search, open systems and links, new information services)
Economic/Commercial trends Global economic trends: China’s importance continues to grow, economically and scientifically. New opportunities arise as increased urbanisation in the third world takes hold, which in turn will unleash new consumer power and give rise to middle classes in the developing world, not least in Asia, South America and Africa (BRIC and VISTA). These trends change the agenda of scientific research
and reduce the traditional dominance of western nations as the sole source of scientific endeavour.
Budget constraints: Both research and academic teaching functions are competing for scarce public funds and resources. This is particularly the case within research libraries, which have long struggled to balance the growth in research output against their diminishing collection budget allocations.
Openness: Open innovation is leading to growing support for different business models to disseminate research findings. In particular there has been a rise in gold, green and hybrid Open Access publishing options.
Commercial exploitation: Openness has coincided with a reaction among many eminent scientists against the business practices of traditional (high-price) commercial journal publishers. Librarians have also been vocal in criticising commercial journal pricing policies. Economic and commercial factors can be illustrated as follows:
Fig. 4.3: Economic/Commercial Trends (a diagram placing researcher behaviour at the centre of clusters covering the costs of publishing – refereeing, electronic publishing; openness and the free 'frustration gap' – Green/Gold OA developments, mandates, datasets; grass roots, publisher, and library/other driven services – Google, discipline-focused search; and new competitors and pricing models)
Alternative financial models: In addition to Open Access, there are experiments with micropayments and pay-per-view for delivery of specific information items. Entrepreneurs from outside the publishing industry have often led in this area – for example, DeepDyve, a Silicon Valley start-up (see under Business Models in chapter 24).
Political/Administrative trends
National and centralised policy directives: Long-term strategies relating to the output of research results are being set by government agencies such as the Department for Business, Innovation and Skills (UKDBIS) in the U.K. These policies often have economic austerity programmes as their primary driver, rather than other social considerations such as extending the reach of research output, raising awareness of science within society, etc.
Tradition versus Authority: The authority of science rests essentially on its tradition and the universal acceptance of quality judgements; but this tradition upholds an authority which also needs to foster originality (see above). These two processes are difficult for funding agencies to balance. The question is whether a modern society can be bound by tradition – whether things need to change to free science from its 'self-imposed shackles' – and what role funding agencies have in creating the requisite balance.
Administrative changes: Related to the above, both public and private funding agencies in the sciences are setting new demands and specifications for improving the impact and effectiveness of the research outputs which they fund. Within the U.K. these include HEFCE with its RAE and REF assessments, the Wellcome Trust and the UK Research Councils.
University viability: Higher education institutes are seeking a new mission for themselves as they face challenges from digitally delivered distance online learning and massive open online courses (MOOCs). Those U.K. universities without a strong global brand will strive to remain viable in the face of higher student attendance fees, and to avoid being overtaken by international competition from centres of excellence in the USA, China, etc.
New business ventures: New players have entered the scientific information market, often started by researchers who have a clear understanding of the information needs of researchers in their own areas.
This has led to operations such as Mendeley, Utopia, ReadCube, and CiteULike filling a void which traditional, editorially-led publishing activities did not address.
Entrepreneurship is becoming democratised, which means no current stakeholder is safe. The barriers to entry into scientific publishing are dropping as costs tumble and interactivity and social networking grow.
Content availability: Vast digital archives and information resources are available and being created as part of an 'information explosion'. This is occurring not only with text but increasingly with large datasets and multimedia (audio/visual). As early as Toffler's 'Future Shock' it was claimed that "[Information overload] was a psychological syndrome experienced by individuals, rendering them confused, irrational and demotivated" (Toffler, 1970). Advances in available content sources since then have worsened the situation and led to various coping strategies being adopted by individual researchers. The various political and administrative trends are summarised in the following diagram:
Fig. 4.4: Political/Administrative Trends (a diagram placing researcher behaviour at the centre of clusters covering science and research policy – universities and funding agencies, research assessments, national R&D support, the 'twigging' phenomenon, disciplinary differences; 'openness', interactivity and collaboration; and alternative business models – publishers' size and ownership, document delivery and PPV, learned societies, APCs versus subscriptions, library issues, pricing models)
Enhancing the above is the 'network and multiplier effect', whereby the combined interaction of these trends exponentially increases the rate at which change is being brought about. It emphasises that a new scientific communication process is emerging, one which is more in tune with the requirements of an expanding, digitally and Internet-competent society. It raises questions about the viability of the traditional ways of scientific communication, based on a model from the era of printed communication. It also raises questions about the fairness and equity of a system which might currently be labelled 'elitist'. Given the new technologies and procedures which are available, scientific information dissemination could become more 'democratic' and 'open'. This study therefore focuses on the potential for change towards greater democratisation of research information. As a by-product, it envisages that new audiences, such as knowledge workers in general, would be brought into the scientific communication process. This is the main objective of this book: to highlight that there are alternatives to the current scientific publishing system which derive from external factors (such as social and/or technological progress) and which would result in a wider spectrum of users being brought within the scientific communication process. Before that can be achieved, there are still several technical, legal, social and commercial (business model) barriers which need to be overcome. However, history provides the experience on which future systems can be built. The next chapter looks at how the information economy has emerged and what form it has taken. This will be followed by an examination of how robust the players are in this volatile industry.
Chapter 5 Information Society
Introduction
'Knowledge' as an asset within society first became recognised as a social force within the past century, heralding the so-called Information or Knowledge Economy. That economy is still evolving, becoming more dominant, yet changing direction as other forces (see above) exert their external influences.
Information Society
The concept of the 'Information Society' is fairly recent. In the eighteenth century the U.K. was still largely an agrarian economy; it became an industrial economy in the nineteenth century, with a strong service sector (finance, entertainment, etc.) emerging in the early twentieth century. It was not until the middle of the twentieth century that the emergence of the Information and Knowledge economies became apparent. Only then did the role of knowledge workers become a topic for discussion. The business consultant Peter Drucker, in his book "The Landmarks of Tomorrow" (Drucker, 1959), is credited with coining the term 'knowledge workers'. He was followed by Fritz Machlup, who provided a systematic analysis of knowledge within the U.S. economy in his book "The Production and Distribution of Knowledge in the United States" (Machlup, 1962). His analysis was followed by a similar but more extensive quantitative study by Marc Porat in "The Information Economy: Definition and Measurement" (Porat, 1977). There was general agreement that information and knowledge work accounted for almost 50% of gross domestic product (GDP) by the 1970s, and that knowledge workers similarly represented about 50% of the workforce. Information within society had become, and was recognised as, a valuable social asset.
Information Economy One of the more rigorous treatments of the size of the knowledge worker sector was undertaken by Porat (Porat, 1977). He collected data about information
activity in the U.S. economy, and promoted the idea that such data should be centrally collected by a federal agency within the Executive Office. He proposed a conceptual framework for defining information activities in an advanced economy and for quantifying them. Porat was not alone in trying to systematise the information economy at large: he was preceded by Fritz Machlup (Machlup, 1962) and Daniel Bell (Bell, 1973). But Porat's statistical research, his input/output analyses, was nevertheless ground-breaking. Porat divided the U.S. economy into six sectors: three information sectors, two non-information sectors, and a household sector. The three information sectors produce and distribute all the information goods and services required by the economy. The two non-information sectors supply all the physical or material goods and services whose value or use does not primarily involve information. The household sector supplies labour services and consumes final goods. Porat identified 26 major information industries constituting the primary information sector, and also pointed to the contribution from the secondary information sector, which accounted for 82 non-information industries. Conceptually, he claimed that 'information' cannot be condensed into one sector – such as mining – but rather that the production, processing and distribution of information goods and services should be thought of as an 'activity' across all industries. There is also another distinction which information has over other commodities: it does not depreciate with use; in fact, in some instances its value is enhanced by use. Porat concluded that 25.1% of the gross national product (GNP) of the U.S.A. in 1967 was bound up with the primary information economy, which is where information is exchanged as a commodity.
This excluded the secondary information economy, which includes all the information activities produced for internal consumption by government and firms, where information is embedded in some other good or service and is not explicitly exchanged. This amounted to an additional 21% of the U.S. economy. According to Porat, nearly half the labour force held some sort of information-related job: by 1967 the information sector had become dominant, rising from a low of 15% of the workforce in 1910 to over 53% of all labour. Porat measured the primary and secondary information economies by taking each candidate industry and separating information from non-information activity. In some instances this was easy – computers and telecommunications, for example. In others, it involved analysing the workflow of industries such as finance and real estate in great detail. The allocation of time to information
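Porat's two headline percentages can be combined to show where the 'nearly half' figure comes from. The sketch below simply restates the figures quoted above; the variable names are mine, for illustration only.

```python
# Porat's 1967 estimates for the U.S. economy, as quoted in the text
primary_share = 25.1    # % of GNP: information exchanged as a commodity
secondary_share = 21.0  # % of GNP: information embedded in other goods/services

total_information_share = primary_share + secondary_share
print(f"Total information economy: {total_information_share:.1f}% of GNP")  # 46.1%
```

This combined share of roughly 46% of GNP sits alongside Porat's finding that over 53% of the 1967 labour force held information-related jobs.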
activity for each profession or trade will differ, but even in 1974 physicians on average spent 68% of their time on information-related activities. Machlup produced a similar treatment of the U.S. information economy (Machlup, 1962), which provided the empirical data for subsequent work by Daniel Bell (Bell, 1973), among others. Machlup's basic accounting scheme began with five major classes of knowledge production, processing, and distribution, and 30 industries classified into (i) education, (ii) research and development, (iii) media of communication, (iv) information machines, and (v) information services. Machlup's estimate of total knowledge production of $133,211 million (1958) compares with Porat's $71,855 million for the same year; the difference is accounted for by the latter's exclusion of secondary information services. The point is that 'information' in all its forms and guises has become pervasive, and its measurement has shown it to be a significant social asset. It therefore behoves society to ensure that the barriers and obstacles affecting its effective dissemination and use are removed.
DIKW (Data, Information, Knowledge, Wisdom)
Fig. 5.1: From Data to Wisdom (a pyramid with data at the base, rising through information and knowledge to wisdom at the apex)
– Data: symbols
– Information: data that are processed to be useful; provides answers to "who," "what," "where," and "when" questions
– Knowledge: application of data and information; answers "how" questions
– Understanding: appreciation of "why"
– Wisdom: evaluates understanding
An approach to the challenges facing those active in the information economy has been put forward by Russell Ackoff, who proposed the idea of an information pyramid (as shown above in Fig. 5.1) (Ackoff, 1989). At the base of the pyramid are vast data resources. Data by itself has limited value. Above this
is information, which has itself become a problem as 'information overload' entered the vocabulary (one of the first uses being by Alvin Toffler in his 1970 book 'Future Shock'). By the mid-1990s the concept of knowledge had been built on top of the information stratum. Whilst information is 'structured data', knowledge is 'actionable information'. Knowledge is about the filters in place which reduce the 'fire hose' – the massive outflow of published material – to what we really need to know. Filtering and linkage are the new phenomena of the digital age. At the top of the pyramid is wisdom. These separate analytical studies of the information economy shed light on how important information has become within modern society. It extends over many aspects of society – it is not confined to a small scientific community; it is ubiquitous. It also emerged that information as a commodity has become important in its own right, which led to the phrase 'information or knowledge is power' becoming popular. Though we are dealing with an intangible product, it is vital for underpinning the progress of society, as the early pioneers – Drucker (1959), Machlup (1962), Bell (1973), Porat (1977), Castells (1996), et al. – have all demonstrated. Good information oils the machinery of the economy and fuels its expansion. The question which underpins this particular study is how equitably and evenly spread commoditised information is, particularly among academically trained users of research information. And if, as is hypothesised, there are barriers in place preventing interested parties from gaining equal and easy access to required information sources, how can this be rectified? After all, if over 50% of the working population rely on information in one form or another, the auguries are that scientific information has broader appeal than just within academia.
The information needs of the unaffiliated knowledge workers (UKWs) have not been a priority in any of the main studies undertaken in the past.
Sociology of Science

Another leading writer on the impact of the emerging information economy has been Robert K. Merton (Merton, 1968), who established the sociology of science as an intellectually respectable discipline. Merton was interested in the interactions between social and cultural structures and science. He developed the 'Merton Thesis', explaining some of the causes of the Scientific Revolution, and also the Mertonian norms of science, often referred to by the acronym CUDOS. This is a set of ideals dictated by what
Merton takes to be the goals and methods of science, and which are binding on scientists. They include:
– Communalism – the common ownership of scientific discoveries, according to which scientists give up intellectual property in exchange for extending their personal recognition and esteem;
– Universalism – claims to truth are evaluated in terms of universal or impersonal criteria, and not on the basis of race, class, gender, religion, or nationality;
– Disinterestedness – scientists are rewarded for acting in ways that outwardly appear to be selfless;
– Organised Scepticism – all ideas must be tested and are subject to rigorous, structured community scrutiny.
But how does this impact specifically on the development of scientific publishing? What does it mean in terms of scientific publishing providing the glue which holds much of the development of science together?
Scientific Publishing

Publishing the results of scientific research involves many agencies – funding organisations, research institutes and universities, publishers, libraries, researchers and their collaborators, and intermediaries such as subscription agencies – with the final outcome being a document which summarises the work achieved with the funds made available. Linked with this research process are a number of bespoke services which identify the individuals, organisations and publications involved; such identification helps in tracking, monitoring and generally giving confidence to the research process.

Science overall has been growing at a steady rate during the past few decades, though there is controversy over how much growth has occurred. The difficulty lies in measuring science as an entity. One possible measure is to relate the growth of science to the growth in output of scientific publications. Though Mabe and Amin (Mabe & Amin, 2001; 2002) published a number of articles in the early 2000s showing that science and science publishing grew at a steady 3% to 3.5% per annum, this has been challenged by some authorities, and figures in excess of 4% have been suggested (4.7% by Thomson Reuters, a leading organisation providing data on scientific publishing based on citation analysis).
[Fig. 5.2: The research process in outline – a flow diagram of the cycle of U.K. research funding, linking international funding agencies, research councils, the Wellcome Trust and other charities, through grants, to principal investigators, institutions and research teams. Outcomes and supplementary information are tied to identifiers (DOIs, CrossRef, DOAJ/DOAR, Ringgold institutional IDs, Elsevier/Thomson author IDs, the JISC naming authority) and to open access routes; metrics include log downloads and citations (CIBER, ISI, RAE).]
The problem with these differing figures lies in comparing like with like: for example, whether just the core natural science journal outputs have been examined (which show a slower growth rate than some of the newer sci/tech disciplines), and whether conference proceedings are included (proceedings being more important in some scientific fields with high growth rates). These variations have implications for the assumed overall growth rate in science and science publishing.

It might therefore be useful to understand more about the structure and size of the current science research and science publishing sectors. Are they large dinosaurs or small sprats? Are they capable of changing direction, if such change is warranted, or are they more analogous to slow-turning sea-going tankers, reflecting the length of time publishers may need to change their strategic policies? Do they have the information needed to make such sea-changes in their business operations and strategies to address new market opportunities? How reactive and flexible can they be?

Fig. 5.2 above shows how the various institutions involved in the spectrum of research funding relate to each other. This structure applies particularly to the U.K. research scene.
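The practical difference between the disputed growth rates is easy to underestimate because growth compounds. A back-of-the-envelope sketch (my own illustration, not from the book) of what a 3%, 3.5% and 4.7% annual growth rate imply for doubling times and twenty-year output multiples:

```python
import math

def doubling_time(rate: float) -> float:
    """Years for output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

# Compare the growth rates discussed in the text
for rate in (0.03, 0.035, 0.047):
    multiple_20y = (1 + rate) ** 20  # output multiple after 20 years
    print(f"{rate:.1%} p.a. -> doubles in {doubling_time(rate):.1f} years, "
          f"x{multiple_20y:.2f} after 20 years")
```

At 3.5% per annum the literature doubles roughly every twenty years; at 4.7% it doubles in about fifteen, so the choice of estimate materially changes any projection of the size of the journal system.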
U.K. Status in Scientific Research

In global terms, U.K. science punches well above its weight: whilst geographically and demographically the U.K. is small, in global scientific research it plays a significant role. The quality of its research effort, as measured by the number of citations received from other researchers worldwide, makes the U.K. a significant player on the international scene. As such, we also need to take account of global trends when looking at how research results are disseminated.

While the U.K. represents just 0.9% of the global population, 3.2% of global R&D expenditure, and 4.1% of global researchers, it accounts for 9.5% of downloads, 11.6% of citations, and 15.9% of the world's most highly-cited articles (UKDBIS, 2009). Amongst its comparator countries, the U.K. has overtaken the U.S. to rank first by field-weighted citation impact (an indicator of research quality). Moreover, with just 2.4% of global patent applications, the U.K.'s share of citations from patents (both applications and granted) to journal articles is 10.9%. The U.K. is therefore highly productive in terms of articles and citation outputs per researcher or per unit of R&D expenditure. The observation that the U.K. punches above its weight also reflects the underlying well-roundedness and high impact of U.K. research across scientific disciplines. The U.K.'s field-weighted citation
impact continues to rise despite a decreasing share of global articles, and this trend is broadly reflected across most research fields (with the exception of Social Sciences, Business and Humanities).

During interviews with individuals in the academic sector from across the U.K. and abroad, international collaboration and researcher mobility were acknowledged as crucial to the further development of the U.K.'s world-leading position as a research nation, particularly in light of the relatively limited inputs to the U.K. research base in terms of R&D expenditure and the number of researchers. U.K. researchers are not only highly collaborative and mobile across international borders; it appears they are also cooperative and mobile between the academic and corporate sectors within the U.K. Traditional institutional and geographic boundaries are breaking down as far as research is concerned. This is important for UKWs – the research structure becomes de-fossilised and less dependent on academia. It is also apparent that the U.K. has been successful in commercialising intellectual property (IP) derived from academic research when compared with other countries for which comparable indicators are available.

However, while the U.S.A. remains the world's largest research base, recent trends indicate that the relative standing of it and of the other traditional research powerhouses like the U.K. may be eroded by competition from the emerging nations of the East, most notably China and India. Given China's increasing rate of international collaboration and a net inflow of researchers, it seems likely that quality will follow. While the mechanisms of research funding and researcher training – as well as the economic context of national research bases – make direct comparisons difficult, it is clear that the global research ecosystem has become increasingly complex in recent years.
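The 'punching above its weight' claim can be made concrete by dividing the U.K.'s world-share of each research output by its world-share of each research input, using the percentages quoted above (UKDBIS, 2009). This is my own illustrative calculation; the ratio names are not an official metric:

```python
# U.K. world shares, in percent (UKDBIS, 2009)
uk = {
    "population": 0.9, "rd_expenditure": 3.2, "researchers": 4.1,    # inputs
    "downloads": 9.5, "citations": 11.6, "top_cited_articles": 15.9, # outputs
}

def leverage(output: str, input_: str) -> float:
    """World-share of an output divided by world-share of an input."""
    return uk[output] / uk[input_]

print(f"citations per unit of R&D spend:   {leverage('citations', 'rd_expenditure'):.1f}x")
print(f"top-cited articles per researcher: {leverage('top_cited_articles', 'researchers'):.1f}x")
```

On these figures the U.K. generates roughly three and a half times the citations its share of R&D expenditure would predict, and nearly four times the highly-cited articles its share of researchers would predict.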
Industry Facts and Figures

The world devoted 1.7% of its gross domestic product (GDP) to R&D in 2007, a share that has remained fairly stable since 2002. In monetary terms this translated into US$1,146 billion in 2007, an increase of 45% over 2001 – slightly higher than the rise in global GDP over the same period (43%). Research is seen as the basis of, and driver for, economic growth and social advancement. Despite global economic problems, countries are still spending on R&D as a way of stimulating economic growth and buying themselves out of austerity.

There are some regional differences in the support provided for R&D throughout the world. The developed world is maintaining a steady annual increase in its support for R&D. The BRIC (Brazil, Russia, India and China) and VISTA
(Vietnam, Indonesia, South Africa, Turkey and Argentina) countries are expanding their research commitments at a faster rate, albeit from a low base.

The needs of developing countries for research information mirror the needs of unaffiliated knowledge workers within the U.K. All these countries suffer from exclusion from the centre of research activity – they stand at the periphery, looking in at the western research effort, unable to interact or communicate with such research on a level playing field. Both developing nations and UKWs in developed countries face the challenge of breaking down the barriers to scientific information access. The following data give an indication of how large the worldwide scientific information system is.
Publishing Revenues
– The scientific information industry worldwide – books, journals and databases – generated revenues estimated at $23.5 billion in 2011 (Outsell, 2013).
– Scientific and technical revenues grew by 4.3% from 2010 to 2011 (to $12.8 billion), and medical sales grew by 2.0% to $10.7 billion (Outsell, 2013).
– Scientific, technical and medical journal revenues alone in 2011 were $9.4 billion (Outsell, 2013).
– Journal publishing revenues in the U.K. come from library subscriptions by academic institutions (68% to 75% of the total) and from corporate subscriptions (15–17%). This amounted to £112 million from universities and £75 million from others (RIN, 2012).
– An additional source of income is the charges levied by publishers on authors to enable their articles to be read by all, and not just by subscribers to the journals. The extent of these APCs (article processing charges) is unknown but currently assumed to be marginal (they vary from $2.5k to $3.4k per accepted article).
– One publisher, Hindawi, based in Cairo, Egypt, switched from a journal subscription to an open access business model and allegedly had a surplus of $3.3 million on revenues of $6.3 million in 2012, indicating that new business models can be as lucrative as traditional ones.
– The commercial scientific journal publishing industry is dominated by a few key players. These include Elsevier, which alone publishes over 2,200 journals; Springer S&BM, which in January 2015 entered into a joint venture with Nature Publishing Group and now has a comparable number of journal titles; Wiley, which includes Blackwell Scientific; and Taylor & Francis, part of Informa. These are commercial companies with a strong drive to meet shareholder expectations on profitability as much as satisfying user demand for publications (source: publisher press releases).
– There are many learned societies which balance their mission in support of education and training programmes for their professional members with maintaining commercial viability from publishing activities. Many U.K. learned societies are small and represent the 'long tail' of publishing.
– There are 330 members of the U.K.-based Association of Learned and Professional Society Publishers (ALPSP) from 39 countries, but this is only a small selection of the learned society publishers worldwide (based on the ALPSP web page, 2014).
– The U.K. publishing industry occupies a major position within global scientific publishing, generating £800 million of annual export revenues. However, both the U.S.A. and the Netherlands are also key centres for commercial scientific publishing.
– In broad terms, 55% of global STM revenues come from the U.S.A., 30% from Europe, 10% from Asia/Pacific and 5% from the rest of the world.
Users
– There are approximately 4,500+ research-based institutions in 180 different countries.
– 9,227 universities are listed in 204 countries, 168 of them in the U.K. alone (source: HESA, 2013).
– There are an estimated 7 million to 12 million researchers worldwide (The Economist, 19/10/2013, puts it at 6–7 million). Academia.edu, a new information service, puts the number at 17 million, including postgraduate students (source: Academia.edu web site).
– The National Science Foundation estimated the science and engineering (S&E) workforce in the U.S. at between 5 million and 19 million in 2010 (NSB, 2014).
– NSF estimates 5.4 million college graduates are employed in S&E occupations in the U.S.: 2.4 million in computer/mathematical sciences, 1.6 million in engineering, 597,000 in the life sciences and 320,000 in the physical sciences.
– Scientists and engineers with S&E doctorates in the U.S.A. were split 46% in the business sector and 45% in education (NSB, 2014).
– Small companies are important employers of U.S. S&E graduates – companies with fewer than 100 employees employ 37% of graduates in the U.S. business sector.
– Unemployment rates for those in S&E occupations are lower than for the overall U.S. labour force – 4.3% compared with 9.0% in 2010 (NSB, 2014).
– Between 1960 and 2011 the number of workers in S&E occupations grew at 3.3% p.a., compared with 1.5% p.a. for the overall workforce.
– Ware puts the number of users at between 6.5 and 9 million worldwide (Ware, 2012).
– Approximately 30 million people are current readers of scientific literature.
– There were 132 million tertiary-level students worldwide in 2004 (Weller, 2011).
– There are about 2.5 billion article downloads from publisher web sites per annum (plus an additional 400 million from other web sites).
– The universe of 'knowledge workers' is somewhere between 600 million and 800 million.
– Academia.edu has 5 million scientists as users (source: David Worlock blog).
– 30 million article citations are made each year.
One thing which stands out from the above statistics is the large range in the estimates. It suggests that publishers in particular have been content to focus on the sector they know best – institutions – rather than looking further afield to serve the more diffuse and difficult-to-reach 'individual' market of knowledge workers.
Output
– Worldwide R&D expenditure is estimated at $1,146 billion (NSB, 2014).
– The U.S. government alone spends $100 billion on research per annum, $39 billion of which on basic research (NSB, 2014).
– Globally, 3 million STM articles are submitted each year to scientific journal publishers. Only 1.85 million articles were actually published in 2012 (the rest were rejected in their current form, though frequently recycled into other journals).
– Article output and journal titles are rising by at least 3.5% to 4% per annum, in line with research activity (see Ware & Mabe, 2012).
– 28,135 scientific journals (refereed, scientific, still active) were published in 2012 (available in 55,311 different formats).
– Approximately 500 new journals are launched each year.
– The number of STM publishers is estimated at between 5,000 and 10,000. There is a 'long tail' of single-journal publishers who may not regard themselves as publishers.
– 657 publishers are responsible for 11,500 journals (40% of the total). Of these, 477 publishers (72%) and 2,334 journals (20%) were not-for-profit.
– 40 million articles are available digitally, back to the early 1800s.
Editorial
– There are 350,000 editorial board members.
– 5,000 new editors are recruited by publishers each year, adding to the current total of 125,000 editors (ICSTI Conference, Beijing, 2012).
– Over 5 million referees are included in the quality control system.
– More than 30 million author/publisher communications take place each year.
– 16,000 people participate on Wikipedia.
– 230,000 open source projects are available.
Intermediaries
– Google accounted for 87 billion online search queries in 2009, out of a global total of 131 billion.
– Search services such as Google and Yahoo are growing more rapidly than the industry as a whole; the academic and scientific publishing segment is growing more slowly (Outsell, 2011).
– In 2009 Wikipedia accounted for 55.6 million online searches.
– Traditional intermediaries in the academic sector – journal subscription agencies and booksellers – have faced a torrid time over the past two decades, and many have ceased operations (the most recent example being Swets, which declared bankruptcy in October 2014).
– Disintermediation by publishers has become a distinct commercial policy of a few dominant players.
As a service industry supporting the creation and dissemination of research results, the scientific communications sector has many advantages. It is a solid sector with a strong history of stability. There is a well-developed editorial and marketing infrastructure in place. It still has the support of the main body of academic authors and researchers in ensuring that quality is upheld. There are powerful brands in place in which authors and users put their trust. Despite these unique aspects, some of which do not gel well in a digital world, there is clearly a strong conservative, cautious approach among those at the coal face of scholarship and research – a conservatism which resists dramatic changes to their traditional ways of operation.
[Fig. 5.3: The STM Publishing Industry in Context – a nested context diagram: global R&D spend ($1,146 billion); the STM publishing industry ($23.5 billion); STM journals ($9.4 billion; 1.5 million articles p.a., growing at 3.5% p.a.); corporate R&D, research institutes and academia (4,500 to 10,000 academic/research institutions worldwide; 7 million researchers; 132 million students); 30 million readers; and knowledge workers (an estimated 600–800 million worldwide; 50 million in the U.S.A.; 11 million in the U.K.).]
The key statistics referred to earlier are summarised in Fig. 5.3 above. This summary sets the scene for some critical issues the industry faces, the first of which is the drive towards achieving optimal economies of scale.
Economies of Scale

There is a difference in scale between the large commercial publishers active in the STM area – such as Elsevier, Springer Science and Business Media (S&BM), Nature Publishing Group, Informa (Taylor and Francis), and Wiley (Wiley-Blackwell) – at one
end of the spectrum, and the small, highly specialised learned society publisher or university press at the other. It is not just a matter of scale; it is also the focus and professionalism of the commercial approach being taken.

There is a dynamic at play here. Traditionally, learned societies saw their publication programme as a key asset for their members, making sure that the latest research in their particular field was published through the society's own journal. In recent decades learned societies – particularly the highly specialised and generally small-scale societies – have agreed for their publication programmes to be managed by, and subcontracted to, larger commercial companies. Economies of scale are important when reaching out to a global audience. A sophisticated sales and marketing support apparatus is required, which can only be sustained if the number and range of products being sold is extensive. It also needs investment in production and IT skills as the transition from print to electronic publishing takes place, and this adds to the cost base of the modern STM publisher. Learned societies feel unable to commit resources to support their publication effort if it means that other aspects of their corporate mission are compromised. (A later section of this book, Chapter 23, reviews the various missions which learned societies undertake.)

However, such developments are not without negative consequences. According to Professor Nerissa Russell, an anthropologist and Chair of the Cornell University Faculty Library Board, speaking at the March 2014 Faculty Senate Meeting:

"There's been tremendous consolidation in the publishers, and things that used to be published on their own by learned societies are now being contracted out to these commercial publishers. There are about five commercial publishers, and they're jacking up the prices to make money because they can."
Larger commercial publishers have indeed offered their more sophisticated infrastructure and support services to small learned societies, and included the societies' titles within their broader portfolio of offerings. In effect the larger publishers were buying market share at the expense of smaller learned society publishers – the larger publishers competed among themselves to offer more attractive terms and better 'reach' in order to induce the societies to have their titles absorbed into a larger endeavour. Scale became everything, and the publisher with the biggest scale – Elsevier – also had one of the healthiest profit margins.

In addition, there was a spate of merger and acquisition (M&A) activity among commercial publishers in the 1980s and 1990s, again in an attempt to create greater economies of scale. An investigation into whether such
merger activity created a monopoly was undertaken by the U.K. Office of Fair Trading in 2001. It arose out of concerns about the merger between Elsevier and Harcourt (Academic Press), and resulted in an Office of Fair Trading report in September 2002 which concluded that it would be inappropriate for the OFT to intervene in the market at that stage (UKOFT, 2002). Merger activity peaked again in late 2012, and once more in early 2015 as the sale of Springer became a popular topic of media attention.
Growth in Science

Although society is becoming more cerebral, scientific information represents only a small share of national economic activity. It has been described in the past as a 'cottage industry,' reflecting the fact that the main scientific publishers in the years leading up to the 1980s were small in size relative to some other media giants. That changed during the last forty years as the Internet and digital publishing developed. During this time there has been a major swing in the economics of scientific information from 'scarcity' to 'abundance.' The costs of creating and disseminating scientific information tumbled and more and more information appeared in new formats, even if the charges publishers set for accessing it still related to an era of scarcity (high prices and low circulations for their research journals).

During the second half of the twentieth century, scientific information came into its own, and new high-level concepts were explored. Instead of concentrating on quantitative data to support the information economy, the new emphasis is on describing the changing social scene – on how users of information are altering their habits in tandem with the new information environment. Authors such as Malcolm Gladwell ("The Tipping Point"), Chris Anderson ("The Long Tail"), James Surowiecki ("The Wisdom of Crowds"), Michael Nielsen ("Reinventing Discovery"), David Weinberger ("Everything is Miscellaneous" and "Too Big to Know"), Don Tapscott ("Grown Up Digital" and co-author of "Wikinomics"), Clay Shirky ("Here Comes Everybody") and Martin Weller ("The Digital Scholar") are some of the many writers who have pointed out that technological developments have an impact on research output (see chapter 14 for more detail).
This has affected user behaviour, which in turn raises the question of the role of a broader knowledge worker community within the new information economy.
One significant development was the introduction of the concept of the 'Long Tail' (Anderson, 2004; 2009a), which has begun to replace the Pareto Principle (Pareto, 1971) as an analytical tool. Evidence suggests the Internet's 'Long Tail' has dramatically changed the business profiles of many consumer industries accessed through the Internet (such as books on Amazon, music on Apple's online store, etc.). As will be argued later, the 'Long Tail' can also be applied to the audience profile for scientific research output.
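The contrast between the Pareto view and the 'Long Tail' can be illustrated with a toy calculation of my own (not from Anderson): under a Zipf-like popularity curve, the head of the market still dominates, but the aggregated demand in the remaining 80% of items is far from negligible – which is precisely the audience a head-focused business model ignores.

```python
def zipf_shares(n_items: int, exponent: float = 1.0) -> list[float]:
    """Normalised demand share for items ranked 1..n by popularity (Zipf-like)."""
    weights = [1 / rank ** exponent for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10_000)
head = sum(shares[: len(shares) // 5])  # demand captured by the top 20% of items
print(f"head (top 20% of items): {head:.1%}, long tail (remaining 80%): {1 - head:.1%}")
```

With 10,000 items and a Zipf exponent of 1, the top fifth of items captures roughly five-sixths of demand, leaving a tail share that a purely Pareto-minded supplier would write off, yet which online distribution can serve at near-zero marginal cost.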
The Twigging Phenomenon

The nature of scientific activity is such that new subject areas are always being created as the frontiers of science are pushed out ever further. Science fragments into ever smaller sub-disciplines – the so-called 'twigging phenomenon' (Small, 2006). Each new sub-discipline is a breeding ground for a group of like-minded researchers to come together and create a new learned society, and this leads to the development of a common forum for the exchange of relevant information. Such groups want the published output from their members to gain international visibility through the medium of a learned journal, and so publishing potential grows at the grass roots. And as these new and emerging groups confront the economies of scale needed to achieve global recognition, many of their new titles are eventually subcontracted to commercial publishers for commercial exploitation.

This means that scientific publishing has become a highly competitive process, with each publisher seeking to be the first to establish a journal or book series in any new research area, to sign up the best editors, and to offer the best service to authors. Each such title is unique: even where similar titles focus on the same area, they are not necessarily substitutable, as each carries separate and different reports of individual discrete research findings. They differ only in perceived assessments of quality and brand, something which remains subjective.

The following figure (Fig. 5.4) exemplifies the fragmentation of one part of the physics research area into new subdisciplines, which in turn fragment or 'twig' into smaller specialities over time. This is indicative of how many science areas create new publishing opportunities as research fronts progress.
[Fig. 5.4: Example of 'Twigging' in a Physics Sub-Discipline – a map of research fronts, each labelled with its cluster size, including: narrow baryon state; electroweak breaking; exotic nuclear structure; B-hadron production; fundamental constants; air fluorescence yield; neutrino mixing matrix; gamma-ray bursts; supersymmetric dark matter; power spectra; quasar survey; binary black holes; early galaxy structure; cosmic microwave background; brane cosmology; galaxy clusters; superconformal theory; first stars; interstellar turbulence; and transiting extrasolar planets.]
Paradox in STM Publishing

Traditional scientific publishing displays unique and distinctive characteristics, different from the other main publishing areas. This study takes the position that such a formal structure has become too inflexible to enable the full potential of new untapped market sectors (such as UKWs) to be reached. In essence there are two distinct, slightly overlapping but competitive processes:
– On one hand there is the 'closed circle' of scholarship, whereby authors are also readers, and as authors they seek recognition and esteem for their published work rather than expecting a financial return. They give up some rights over their published results to third parties in order to have their research efforts made available to their peers throughout the world. Achieving worldwide recognition is the main recompense they seek, as this often becomes the source of additional funded research, academic tenure, recognition, and/or personal career advancement. No money changes hands – it is a gift economy.
– Parallel to this is payment for the quality control, production and dissemination of research results. This is handled by research funders, publishers, subscription agents and institutional libraries, the latter being provided with the funds with which to buy the output from the publishers. Money is at the heart of this process – a transactional economy.
– The proposition is that there is a migration from a transactional to a gift economy as the sector moves towards digitisation.
Information Overload

Pervading the whole discussion about information and users is how, in a digital society, people cope with information overload. An early proponent of the idea was Alvin Toffler (1970), who claimed in his book 'Future Shock' that information overload was a psychological syndrome experienced by individuals, rendering them confused and irrational. Whereas Toffler considered information overload a psychological syndrome, it is now viewed as a cultural condition; we now worry not that we are getting too much information, but rather that we are not getting enough. Two coping options have emerged. Individuals can use algorithms, relying on large databases to collect and sift information on their behalf. Alternatively, or in addition, they can rely on the social construct of colleagues and friends pointing them to relevant items. According to Shirky (Shirky, 2008), any problem we have with information overload is a filtering failure. Whilst in a pre-digital world we relied on the old quality sources such as newspapers, journals and textbooks for filtering, in the digital era there has been a shift towards the selection provided by online search engines, and also by informal and social media, as filtering services.

Dr. Jan Velterop has commented on his blog (Velterop, 2014) that there are two issues associated with information excess. First, there is now so much information available in any given field – much more than anyone can find using traditional literature searches – that "the time is long gone when it was possible to go to a small set of journals to find pretty much all needed." As a result, researchers arrive at conclusions based on a small subset of the information available in the area concerned. Secondly, Velterop claimed there is much duplication of research, as researchers are no longer made aware of similar and related research going on elsewhere.
As both these factors "will only add to the noise and volume within the system, the situation promises to get progressively worse" (Velterop, 2014).
The point is that the information economy is in a state of flux, and this book therefore aims to identify the drivers creating this flux, drivers which are changing the habits of knowledge workers. Whether knowledge workers as a community can be absorbed more effectively within the emerging information industry, or will remain, as now, on the periphery of scientific endeavour continuously looking in, will be commented upon.
Implications for Unaffiliated Knowledge Workers

From the evidence presented, STM journal publishing represents a small part of a large and expanding U.K. information economy. The sci/tech publishing industry accounts for £15 billion globally, and journals for half this total. In terms of quality (citations), the U.K.'s scientific research stands up well in global terms. Nevertheless, the industry is at a cusp of change, and elements of a perfect storm are impacting on research, academia and research outputs.

It is apparent that the culture of scientific communication is characterised by its diversity. Twigging ensures that there is growth in output, resulting in information overload. Publishing has its roots in quality control procedures – 'refereeing' is a cornerstone of STM. Tradition still plays a large part in determining how STM publishing is undertaken. It is a volatile industry, only prevented from falling off the cliff by its reliance on R&D funding from public and large corporate resources, which need to ensure innovation is successful and efficient so that society, corporations and the economy improve. It is a closed and conservative community.

Unaffiliated knowledge workers are not an intrinsic part of this community, and as such are locked out of the established scientific communication system. They are outsiders. They have rarely been involved either as creators or users of STM research. Yet they could be beneficiaries if access procedures to research results were relaxed for UKWs. Are there developments on the horizon which could enable the unaffiliated to become active future participants in the information industry? The following chapters look at the various drivers for change in the information and research processes and whether these will release UKWs from their current constraints.
Chapter 6 Drivers for Change

Technological Advances

The end of the twentieth century saw innovations introduced in the field of communications every bit as significant and powerful as the change wrought by Johannes Gutenberg's introduction of moveable type in the fifteenth century, and the subsequent launch of scientific journals in the mid-seventeenth century.
Underlying the spread of electronic information is 'Moore's Law.' Gordon Moore (Moore, 1965), co-founder and formerly chairman of Intel, pointed out that every eighteen months the number of transistor circuits etched onto a computer chip doubled. This 'law' has now held for almost 50 years – a tenfold increase in memory and processing power every five years. As a technological driver it lies behind the fall in prices of personal computers, the rise in popularity of new devices such as smartphones and tablets, and more generally the democratisation of access to large collections of published information in digital form through remote online access.
At the same time as Moore's Law was making its impact on hardware, similar technical developments were taking place in telecommunications. The total bandwidth of the communications industry – driven by improvements in data compression and by the fibre optic strands through which data can now pass – is tripling every year. This effect is often referred to as 'Gilder's Law' after George F. Gilder, a US investor, economist and author of 'Wealth and Poverty' (Gilder, 1981).
A further related 'law' is 'Metcalfe's.' Robert Metcalfe is a US electrical engineer who co-invented Ethernet and founded 3Com. Metcalfe observed that the value of a digital network is proportional to the square of the number of people using it.
The value to one individual of a telephone is dependent on the number of friends, relatives and business acquaintances that also have phones – double the number of friends and the value to each participant is doubled and the total value of the network is multiplied fourfold. Through a combination of these technological laws, one sees that there is an escalating process whereby technology advances the prospects for digital publishing. Though content itself is driven by other (editorial) forces, technological advances provide the means whereby the content – books, journals, articles, data and supporting multimedia – can flow quickly and efficiently through the
research system. In addition they create a new technical infrastructure for information, one which supports new digital media and a changed environment within which both knowledge workers and academic researchers can operate. UKWs now have access to broadband. They can purchase laptops and smart devices for nominal amounts, and the processing power built into such miniature devices is increasing dramatically. Networks of users are being created, with communication both cheap and reliable. UKWs are beneficiaries of all these technological trends, which bring them to the doorstep of the digital revolution in scientific research.
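The compounding behaviour of these 'laws' can be made concrete with a short calculation. The sketch below is illustrative only – the growth rates (doubling every eighteen months; value proportional to the square of users) come from the text, while the function names are the author's own:

```python
# Illustrative arithmetic for the technological 'laws' described above.

def moores_law_multiple(years, doubling_period_months=18):
    """Processing-power multiple after `years`, doubling every 18 months."""
    return 2 ** (years * 12 / doubling_period_months)

def metcalfe_value(users):
    """Metcalfe's Law: network value proportional to the square of its users."""
    return users ** 2

# Doubling every eighteen months compounds to roughly tenfold in five years,
# matching the text's "tenfold increase ... every five years".
print(round(moores_law_multiple(5)))               # -> 10

# Doubling the number of participants quadruples the network's value.
print(metcalfe_value(200) // metcalfe_value(100))  # -> 4
```

The point of the sketch is simply that both effects are multiplicative, so small periodic gains become order-of-magnitude shifts within a researcher's career.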
The Internet

The impact of the Internet on scientific communication in recent years can hardly be overstated. It has transformed the information-seeking habits of researchers in a number of ways. According to David Weinberger in his book 'Too Big to Know' (Weinberger, 2012), several aspects of the Internet have caused changes.
The Internet connects many people. The worldwide population is estimated at over 7,000 million, of which 2,300 million are connected to the Internet (and over 1,000 million are on Facebook). This is a huge reach into society, a massive change achieved within a decade or so. It has spawned concepts such as 'the wisdom of crowds' (Surowiecki, 2004), which challenges the authority of the expert and highlights the importance, in making decisions, of broader input from an audience with wide experience. It has since led, for example, to crowdsourcing (Howe, 2006) as a research methodology. In the past, working in small teams on carefully circumscribed research topics may with hindsight have proven inefficient – particularly in comparison with exposing the research problem to many unrelated researchers from different backgrounds who can provide the right answer. The Internet facilitates such interaction across disciplinary, institutional and geographical boundaries. This has led to 'collective intelligence,' a concept described by Michael Nielsen in his book 'Reinventing Discovery: The New Era of Networked Science' (Nielsen, 2011).
The Internet is sticky and lumpy. It creates and consists of a whole series of sub-networks, each having specialist skills on call.
The Internet is cumulative. The Web retains everything we post to it and makes the whole history of scientific progression easy to follow. It is an open record, or archive, of all that has been said – good, bad, indifferent, new and innovative at the time. The 'cloud' of linked computer power provides almost limitless storage of digital records. It provides a basis for 'standing on the shoulders of giants' (Newton, 1668) and for establishing precedence, even if it lacks the professionalism involved in permanent archiving and curation.
The Internet scales indefinitely. It allows for unprecedented back-and-forth communication through services such as Facebook, Figshare, Twitter, LinkedIn, etc. Millions of people can participate, but equally small groups of tens or more can take part in a highly specialised and targeted dialogue through the Internet. According to Weinberger (Weinberger, 2012), "the complex, multi-way interactions the Net enables means that networks of experts can be smarter than the sum of their participants." This opens up a whole new approach to research interaction and effectiveness, whereby the dominant force is no longer the skills of a few experts but rather the interaction of a broadly based crowd. The inclusion of as much expertise from as wide a group of researchers as possible – including knowledge workers outside academia – is a significant value-add offered by the Internet.
A key point is that these technical advances are not gradual – they are cumulative and escalating in speed of implementation. The STM information scene which existed ten years ago is different from that which exists today, and will be different again within the next five years. It would be wrong to assume that there will be a return to the way of conducting research and producing research outputs which existed decades ago – technology provides new opportunities for more efficient ways of doing things.
Fig. 6.1 illustrates some of the services which have exploded onto the scene since the arrival of the Internet. It shows how ubiquitous online services have become and the variety of functions they perform for a digitally aware population. It should be noted that this snapshot was taken in 2007 – it would be difficult to illustrate the current extensive range of Internet-delivered services in an absorbable manner. Though only a few of these services relate specifically to scientific research, it raises the possibility that some traditional inequities in the scientific communication process, such as the fate of the disenfranchised knowledge workers, could be resolved as further technical advances are implemented – assuming, that is, that the mindset of the research community is amenable to adopting new technical advances in communications.
licensed under CC Attribution-NonCommercial-ShareAlike 2.0 Germany | Ludwig Gatzke | http://flickr.com/photos/stabilo-boss/
Fig. 6.1: Social media sites available on the Internet
Social Developments

Alongside the changes in technology there are signs of a social revolution taking place. Several pundits have commented on the nature of social change (see chapter 5, 'The Information Society'). These commentators are reporting on the second phase of the Information Economy alluded to earlier – the swing from a printed to a digital environment; from one of scarcity to one of abundance.
The focus has been on how users of publications are changing their information habits in tune with the new environment. Many writers (see chapters eight and fourteen) have pointed out that technological developments have an impact on research output. This in turn has affected user behaviour, which raises the question of the role of a knowledge worker community in the new information economy. The various titles of their books (chapter fourteen) bear testimony to the social changes which these authors claim are underway. The common theme is that greater openness and democratisation will occur. It is also felt that the changes in the consumer sector will inevitably spill over into the scientific communication arena, resulting in further changes to acceptable modes of communication.
The 'long tail' thesis (Anderson, 2004; 2009a) claims that Internet channels exhibit significantly less sales concentration than the 80:20 concentration traditionally held as part of the Pareto Principle. Evidence is available to support the claim that the Internet's 'long tail' has dramatically changed the business profiles of many consumer goods bought online. As will be argued later, the 'long tail' can also be applied to the audience profile for scientific research output, with researchers in academia being the 'core' and UKWs being part of the 'tail.'
There have also been studies highlighting generational differences in science communication. The so-called 'X,' 'Y' and 'Z' generations – or the distinction between the Net Generation (Net Gens) and those who preceded the arrival of the Internet – have become a popular topic for discussion.
Whilst the extent of the changes created by generational differences is not clear, it does appear that those academics and professionals starting their careers, both within and outside the higher education system, have a different propensity to adopt digitally delivered information systems from those brought up in a print-only world. Those who grew up pre-1980s still rely – by and large – on in-depth reading of research articles to get their information fixes. Those who grew up after the 1980s engage in multimedia activities, browsing, 'skimming' and multi-tasking. Having grown up on a diet of interactive games, smartphones, iTunes and online systems in their formative years has made it easier for them to accept alternative, digitally based, STM information dissemination systems.
The jury is still out on some aspects of these claims, but there is growing debate about how, and to what extent, online communication is altering the way researchers act as a whole. This will have a bearing on how changing social needs generally can be mapped onto the specific needs of unaffiliated knowledge workers in gaining access to STM research output in future.
Rewiring the Brain

Resistance to change – conservatism with regard to scientific publication – has been a powerful factor in supporting the established print-derived paradigm. The reason is that the traditional system supports the existing reward system, which is currently (mainly) assessed through citation metrics. The effectiveness of authors in research is often judged on whether their output appears in the highest-impact-factor journals. Users of research output have followed the lead of the authors and rely on the published research article in a reputable journal as their main source of credible information.
However, recent research suggests a changing mindset on how people adapt to the challenges facing information acquisition in a volatile digital world. Such research is focused at the individual neurological level, but the implications of aggregating the results to an entire group or community could be profound for scientific communication. One popular report was made by Dr Eleanor Maguire, a cognitive neuroscientist at University College London, who showed that the brains of London's taxi drivers change as they develop their knowledge of the city's streets. The part dealing with navigation – the hippocampus – is larger than in other people (Maguire et al., 2000). This suggests the brain is like a muscle: it can be enhanced with exercise, or atrophy if not used regularly. More recently David Eagleman has described how complex the interactions within the brain are in his book 'Incognito: The Secret Lives of the Brain' (Eagleman, 2011). There are conflicts, adaptations and complicated procedures taking place within the brain, both in its conscious state and in the extensive unconscious area of memory. All impact on the way individuals adapt to external circumstances. According to Nicholas Carr in 'The Shallows' (Carr, 2010), "A new intellectual ethic is taking hold.
The pathways in our brain are [again] being rerouted." Neuroplasticity – the adaptation of the brain's synapses to a changing environment – is the internal mechanism which enables us to cope with new technological challenges. Carr claims that synapses require reinforcement to remain 'live,' as part of his broader claim that Google is making him stupid. He no longer reads articles or books in depth, and he and his colleagues find it increasingly difficult to concentrate on lengthy text. Less contemplative and in-depth reading is taking place. Reliance on snippets of information, gained through powerful search engines, is affecting the habits and practices of researchers, academics and knowledge workers worldwide. It favours the use of metadata over 'stuff' (full text). One set of synapses is in decline; another set is being reinforced. The result is that the mode of communication will change
as our reading patterns change. Lengthy descriptions in scientific reports become passé as the main vehicle for scientific communication. Though this significant, if somewhat sombre, view is not universally endorsed, it does support suggestions that traditional information habits may be modified by environmental developments – that extrapolating from what was acceptable in a printed world is no longer appropriate as the world becomes ever more digital and networked. More specifically, it has implications for how information should be formatted to cope with the needs and habits of a wider knowledge worker audience.
In a related vein, Matthew D. Lieberman has explored individuals' need to connect with other people, a need he claims is as fundamental as that for food or shelter (Lieberman, 2013). He further asserts that, because of this, our brain uses its spare capacity to learn about the social world – other people and our relation to them. It is believed that we must commit 10,000 hours to master a skill (see 'Outliers: The Story of Success' by Malcolm Gladwell, who says that it takes roughly ten thousand hours of practice to achieve mastery in any given field (Gladwell, 2008)). According to Lieberman, each of us has spent many such hours learning to make sense of people and groups by the time we are ten. The brain has evolved sophisticated wiring for securing our place in the social world, wiring which often leads us to restrain selfish impulses for the greater good. These mechanisms lead to behaviour that might seem irrational but is really just the result of our deep social wiring, and necessary for our success as a species. It also becomes an essential support for the 'sharing' of information which has become a critical aspect of the digital/Internet world and of scientific research.
Differences between traditional and emergent researchers also find their roots in the education policies being pursued within society.
Education and Research

This book describes how the higher education system produces graduates and postgraduates who are knowledgeable about their subject areas, and how, as a proportion of society, this 'educated' group is growing. To quote an article by William Park (Chief Executive Officer of DeepDyve, a U.S.-based company seeking to sell articles to individuals in emerging markets): In 1930, 25% of the US population of 122 million lived on farms and only 3.9% of the population had a college degree. Fast forward to 2006: just 2% of Americans live on farms, the US population had nearly tripled, 17% of Americans held a bachelor's degree and nearly 10% a graduate degree (Park, 2009).
Targets set by successive governments in the U.K., whereby college- and university-trained students increase as a share of British society, support this trend towards greater scholarly/scientific awareness. Such a commitment to raising educational attainment in both developed and developing countries creates an environment in which the dissemination of relevant research results finds more fertile ground than existed in earlier decades.
In the U.K. specifically, 82% of U.K. postgraduates and 71% of U.K. graduates in 2010 left academia on completion of their courses to find employment in industry, commerce, the professions or elsewhere (Higher Education Statistics Agency, 2013). In so doing they took their academic knowledge base with them into their careers. They are educated to standards comparable with the 20–30% who remained in academia; they can no longer be separated on the basis of a lack of specialist knowledge. What does distinguish them is that they have greater difficulty accessing the publications which were basic tools of their formative higher education. Why? Because the subscription system within which research output is mainly disseminated comes at a price exceeding the means of all except those served by the largest research institutions and their research libraries.
Fig. 6.2 illustrates the breakdown of destinations of a large sample of graduates from UK universities in 2013 (HESA, 2014). It paints a picture of how universities are feeding the ranks of the UKWs on an annual basis.
[Fig. 6.2: Employment destinations of UK graduates, 2012/13 – most important activities of full-time first degree leavers, broken down by studying (qualification aim) and working (employment basis). Source: HESA Destination of Leavers from Higher Education 2012/13. © Higher Education Statistics Agency 2014]
This democratisation in education becomes a powerful force in opening the gates of STM information systems.
Universities

Meanwhile, the university as a viable institution faces its own set of challenges. It may need to adapt in order to withstand global competition from online learning centres. Massive Open Online Courses (MOOCs) aim to offer unlimited participation via the web. In addition to traditional course materials such as videos, readings and problem sets, MOOCs provide interactive user forums that help build communities of students, professors and teaching assistants (TAs) inside as well as outside an institution. Many universities will also face competition from larger universities with strong international brands; new business models, including partnerships with industry, may be required. In essence, the university will no longer be able to rest on past laurels and remain isolationist, but will need to adapt to new market conditions. In the U.K., the new student fee structures lead to greater expectation of university-provided support services, including access to scientific information resources.
Indications of new approaches are visible in the strengthening of the relationship between the private and public sectors in research – in the emergence of open innovation, of global collaboratories and projects. The central idea behind open innovation is that, in a world of widely distributed knowledge and information, institutions cannot rely solely on their own local research expertise, but should instead cooperate with other organisations whose experts can provide a different perspective on the research project. In such instances there is greater openness, sharing and democracy, rather than the closed-door innovation and central control which typified the traditional singleton approach to research within academia. A more democratic approach, less protected behind academic garden walls, may provide the basis for widening access to a broader segment of society. Democracy carries with it many possibilities, much revolutionary potential.
Certainly such a changing environment would create fewer difficulties for UKW researchers to gain access to the latest output of research. Are we seeing that the old order is no longer fit for purpose in disseminating research output? If so, will such challenge to the existing order provide scope for UKWs to be engaged within the emerging research environment? How will the existing STM publishing system cope with these various challenges? Will information democracy prove to be a bane or boon?
Two main strands have been identified above as far as scientific information is concerned. Firstly, there are the sociological/demographic trends which are often overlooked as determinants of a new information system, yet have a profound influence. Secondly, there are environmental changes involving technology: the new digital age and the Internet culture exert pressure on established routines, nowhere more so than in scientific publishing. The interaction of the two – social and technological – has significant implications for the psyche of the researcher, and could open up the possibility for UKWs to become active participants in the research process, given a suitable framework. Much depends on how the existing stakeholders, notably publishers, are able to adapt to the changing social and technical environments. This conundrum is addressed in the next chapter.
Chapter 7 A Dysfunctional STM Scene?

Tensions within the Existing System

In some minds the STM industry is not a fair or equitable system within the digital world. As a result, governments in the U.S.A., the U.K., Australia, Canada, the European Commission and several other countries have been addressing what has been called 'the serials crisis' within scientific publishing and what it means for their respective information economies. Scientific journal publishers emerged over centuries as the providers of formally published research output in a structured and quality-controlled way, and research libraries became the agencies given the funds with which to purchase published scientific research on behalf of their patrons. This dualism became the main operational infrastructure enabling a smooth transfer of high-level knowledge within society. However, it has operated under two conflicting business cultures.
On the one hand, the published output, or 'Supply,' continues to grow. At the heart of the expansion in the supply of scientific information is the keen competition between researchers as they attempt to advance their careers by publishing more and better articles. Competing for the few tenured jobs available, or winning research funds, is often done through presentation of the best CVs and of papers published in high-impact journals. This is a prime driver for increasing the supply of available publications. It sits within the context of society's support for R&D funding, which seeks to exploit the results of research activity to improve economic performance and international competitiveness. The combination of micro-level competitiveness and macro-level support for research is a powerful stimulus for continued growth in the supply of research output.
'Demand' is driven by an unrelated set of issues.
As science continues to grow at over 3.5% per annum, as student numbers worldwide increase at 6% p.a., and as technology offers an ever-expanding array of applications and data resulting from research, a research library budget would in principle need to expand comparably to cope with demand. However, supply outstrips effective demand to an increasing extent: the budget necessary to maintain a constant collection capable of satisfying a library's clientele would require percentage expansion in double digits. This has rarely been achieved. At best, library budgets have remained static in recent decades; at worst, their buying power has diminished as book/journal prices have risen faster than most other price indices.
The rate of price explosion can be seen from the graphs of book and serial costs in the U.K. from 2002 to 2012 collated by the Library and Information Statistics Unit (LISU) based at Loughborough University. Similar data is made available in North America by SPARC.
[Fig. 7.1: Total expenditure on Books and Periodicals in the UK, 2002–12 – total and real bookspend, total and real periodical spend (£ million), plotted against academic staff and student numbers (000's). Source: SCONUL/LISU database, Loughborough University, UK]
Fig. 7.1 shows the real decline in expenditure on books and journals in the UK university sector despite upward growth in academic staff and students – the people being served by the libraries. The percentage growth in periodical prices by discipline in recent years has also been charted by LISU, and Tab. 7.1 illustrates that prices charged by the main publishers serving academic libraries have risen by between 5% and nearly 7.5% per annum for the main science, technology and medicine periodicals. It is apparent that there has been a sizeable commitment by research libraries in the U.K. to the purchase of serials in the hard sciences. This is in contrast with their commitment to purchasing monographs, which has declined in relative terms over the years, and with the marginalisation of support for the humanities and social sciences.
[Tab. 7.1: Average growth in periodical prices in the UK by subject area – average prices (UK £, USA $, Euro €) and year-on-year percentage increases for social sciences, science, medicine, technology, humanities and general titles, with the number of titles and the average across all subjects. Source: SCONUL/LISU database, Loughborough University, UK]
Research libraries' funding sources are divorced from the creation of research results. Budgets are not aligned with output, and as such research libraries have difficulty maintaining a credible collection in an era of 'digital information overload.' The consensus is that overall research library budgets are at best holding steady, and in many instances declining. They are certainly no longer in a position to absorb ongoing journal price rises.
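The arithmetic behind this squeeze is simple compounding. A minimal sketch, assuming a static budget and the roughly 7.5% annual price rise LISU reports for STM periodicals (the function name and the flat-budget assumption are illustrative, not from the book's data):

```python
# Sketch of how compounding serial price rises erode a flat library budget.

def purchasing_power(annual_price_rise, years):
    """Fraction of the original collection a static budget can still buy
    after `years` of compounding price increases."""
    return 1 / (1 + annual_price_rise) ** years

# At ~7.5% p.a., prices roughly double in a decade, halving what an
# unchanged budget can purchase.
print(f"{purchasing_power(0.075, 10):.2f}")   # -> 0.49
```

In other words, even a budget that merely holds steady in cash terms loses about half its collection within ten years at these price trajectories.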
'Frustration Gap'

The imbalance between supply and demand forces can be quantified by comparing a country's national research and development budget with its expenditure on research libraries over the same period. For the United States such a comparison is illustrated below. The growing gap between the Supply and Demand systems is evident. The diverging nature of the two lines – supply of, and effective demand for, scientific material – has been referred to as the 'frustration gap.'
Another metric showing how librarians have had difficulty coping with the inexorable growth in publishing output is the falling share of the library budget within the overall institutional budget. Between 1994 and 1999 the library's percentage share of the institutional budget of leading higher education (university) institutions in the U.K. was approximately 3.8%. Since then there has been a gradual decline each year, down to 3.4% in 2005. This has steadied in recent years, with the percentage being 2.02% in 2005 and falling to 1.82% in 2009 (source: SCONUL Annual Returns). In the U.S.A. there has been a similar decline: 3.83% of institutional budgets went to the library in 1974, falling to 1.90% in 2009.
Research libraries have difficulty selling themselves as important within their institution when budgets get tight – and therefore in retaining a constant slice of the institutional budget. Whether this reflects increased competition for the institutional budget, or the declining relevance attached to the library and its role in the institution, is difficult to say. But there is concern that libraries cannot use the same metrics to advance their financial cause as other departments within the same institution.
It may be easier for a research centre or department to demonstrate tangible financial payback from additional institutional investment – additional students, extra postdoctoral fellowships, more research grants, and so on – than it is for the library. Payback from an expansion of the library is less demonstrable: it is an intangible infrastructural support service, in many cases seen as desirable rather than as an absolute necessity.
[Fig. 7.2: U.S. Academic R&D Expenditure and ARL Library Budgets, 1976–2003, in 1982 constant dollars (index, 1976 = 100) – average ARL expenditure plotted against academic R&D, with the widening divergence marked as the main cause of the crisis. Sources: ARL and NSF]
Fig. 7.3 illustrates the decline in the library share across all types of higher education institution in the U.K. The graph in Fig. 7.4 shows the decline in library budgets as a proportion of the total spend of major U.S. higher education institutions.
[Fig. 7.3: Library as % of total institutional spend in UK Higher Educational establishments, 1992–2005, for old universities, new universities and HE colleges]
[Fig. 7.4: Library Expenditures as a Percent of University Expenditures for 40 ARL Libraries, 1982–2008. Source: Association of Research Libraries, 2009]
'Serials Crisis'

The confluence of these pressures has led to the 'serials crisis' facing the community. Several measures have been adopted to reduce the pressure created by journal subscription prices. 'Site licences,' enabling libraries to get more bytes for their buck, have been introduced by the larger publishers and publishing consortia. These allow more journals to be delivered at lower unit costs, provided the library commits to taking more of the publisher's output. It also means that the total amount paid to each publisher increases. Publishers have also offered end users the facility to buy individual articles on demand from their websites, at what both libraries and end users generally consider to be extremely high prices.
More recently, the debate has been about whether a national site licence would enable greater reach – in particular, whether a single licence with a publisher to supply electronic material to all higher education institutes in the UK is feasible. Experiments to this end have been initiated in Scotland (SHEDL) and in Wales (WHEEL). Similar experiments are being undertaken, notably in Wales, with regard to providing 'walk-in' access to online publications, subject to rights and permissions. Other experiments are taking place to allow alumni to gain access. Providing access through public libraries is also about to be assessed within the U.K. (the Access to Research project), and a separate scheme for the National Health Service has been launched by Jisc Collections (see chapter 24 on Business Models). The concern among publishers is that material could be diverted away from the intended licensed user group into the more lucrative corporate and international markets, reducing publishers' overall revenues.
Coming to terms with this dichotomy, or imbalance, between Supply and Demand is crucial to supporting a healthy information service. Unfortunately, an open-minded approach to relating a perceived 'need' to an appropriate 'serviceable system' is not always pursued by stakeholders. Emotion clouds key issues, with vested interests promoted by those pushing partial agendas. The inelasticity of demand for individual journal articles; authors' material and the peer review system both being provided free to the publisher; the dominance of the scientific publishing industry by five key commercial players responsible for over half the research articles published each year; and the business model under which publishers operate, which demands payment before the delivery of goods – all are factors creating concern and setting the industry's future course on a potentially uneven keel. As a result, some pundits claim that the traditional system is creaking; that it is no longer fit for purpose (U.K. House of Commons, Science and Technology Committee of Enquiry on Scientific Publications, 2004; McGuigan & Russell, 2008).
Investors versus Customers

Scientific journal publishing has become commercially driven, dominated by a few large international journal publishers for whom the interests of shareholders are just as important as satisfying the needs of users and the research community. Even organisations which nominally operate under the banner of ‘non-profit’ (such as learned societies) often pursue commercialism (‘surpluses’) as intensely as their for-profit competitors. It is inconceivable that the interests and differing objectives of end users of research output, of stockholders seeking optimal financial returns, and of those concerned about the efficacy of the mechanism for supplying a ‘public good’ can all be met under the business practices currently in place. Commercial journal publishers – notably the big five – need to persuade their investors and the financial sector, as well as the library community, that they are acting in their best interests. The financial sectors look at company
balance sheets to see whether short-term financial expectations are being met. Whilst most traditional players in the scholarly communication system assess editorial strengths, global marketing coverage, or new product launches, City financiers are more interested in financial returns and business relationships. It is the clients who receive bulletins from the financiers who determine where investment money goes in future, particularly as several of the key players are owned by investment houses and venture capitalists whose main interest is in seeing that their returns from the company achieve payback within a specific, usually short, period of time. Investment services, such as the media equity researchers at Exane BNP Paribas, keep a watchful eye on the annual financial reports issued by each of the commercial journal publishers. This means that a key objective of scholarly publishers – as well as achieving a credible portfolio of products and services – is to satisfy the demands of their investors. Without such support publishers would see their external funding sources drying up. What does this mean for a scholarly publishing company? It means that it has to produce financial figures each year which are strong and healthy. However, the users and libraries which currently buy its products become alarmed when such healthy figures go beyond what is judged acceptable, leading to charges of ‘price gouging,’ greed, and dysfunctionality within the system. Tab. 7.2 shows the revenues and operating profits which several of the largest commercial, university press and society publishers declared in their 2013 corporate statements. It should be pointed out that these figures include revenues from other publishing activities and not just scientific journal publishing, though the latter often exerts a heavy and positive influence on total returns.

Tab. 7.2: Revenues and profits from the major STM journal publishers (2013)

Company | Revenues (in £mil) | Operating profits (in £mil) | Profitability ratio | Year end
Elsevier (RELX Group) | £,. | £. | .% | December
Wiley | £. | £. | .% | April
Taylor & Francis | £. | £. | .% | December
Oxford University Press | £. | £. | .% | March
Cambridge University Press | £. | £. | .% | April
American Chemical Society | £. | £. | .% | December
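The ‘profitability ratio’ column in Tab. 7.2 is simply operating profit expressed as a percentage of revenue. A minimal sketch of the calculation (the input figures below are hypothetical, for illustration only, and are not the declared 2013 values of any publisher):

```python
# Profitability ratio (operating margin) = operating profit / revenue, as a percentage.
def operating_margin(revenue_m, operating_profit_m):
    """Return operating profit as a percentage of revenue (both in £ millions)."""
    return 100.0 * operating_profit_m / revenue_m

# Hypothetical example: a publisher with £2,000m revenue and £780m operating profit.
print(f"{operating_margin(2000.0, 780.0):.1f}%")  # prints "39.0%"
```

A margin near 39% would sit at the top end of the range discussed in this chapter, well above those typical of the university presses and learned societies.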
Several publishers, such as Springer Science+Business Media, Sage, Nature Publishing Group, and Emerald, do not make financial records publicly available because they are in private ownership. Nevertheless, the drive for healthy financial returns for
these privately held companies also exists. It is apparent that the commercial journal publishers are making profits well above those of other industry sectors, and that they are generating greater profit margins than the university presses and learned societies.

[Figure: bar chart, ‘The Big Four’ – in merging with Springer, Macmillan joined forces with one of the world’s largest scholarly publishers by journal volume. The chart compares the number of journals in the portfolios of Elsevier, Springer, Wiley, Taylor & Francis and Macmillan, on a scale of 0 to 3,500 journals; some journals closed or merged.]

Source: Richard van Noorden, Nature News, 15 January 2015

Fig. 7.5: Concentration within the STM journal publishing industry
Librarians have concerns about such concentration of publishing resources. As Rick Anderson, Associate Dean for scholarly resources and collections at the University of Utah in Salt Lake City, suggests: Publishers are fielding more and more submissions and chasing smaller and smaller budgets while also dealing with an increasingly complex scholarly communication environment. I think more consolidation is inevitable. This merger [Springer and Nature] might mark the beginning of a trend of joint ventures in scientific publishing.
By early 2014 Elsevier Science was being targeted as the villain of the piece. Its operating profit margin of almost 40% has escaped intense public scrutiny only because of the small scale of the industry sector (and the still strong support which authors give to professional publishers). Nevertheless, in its 2013 annual report Elsevier highlighted that 11 million scientists downloaded 700 million articles through its ScienceDirect service, which included over 2,000 journals and 26,000 digitised books. As such, Elsevier is able to use its dominance to provide a number of value-added services, such as SciVal and Scirus, in
addition to relying on its books and journals for its ongoing income stream, thereby diverting attention away from the high profits of its journal programme. Publishers have the unenviable task of achieving a balance between being too successful on the one hand (antagonising the library community) and not successful enough on the other (antagonising their investors and shareholders). Maintaining a moral and commercial balance in satisfying both audiences is increasingly difficult. There is outrage in several of the industry moderated lists – such as LibLicense, GOAL and open-access-request – about the extent of private-sector publishers’ successes in what many claim to be a ‘public utility’, namely STM information. It is unclear whether the moral outrage is focused just on the fact that commercial journal publishers earn high profits from a public resource (research outputs), or whether it stems from their control over access to their product. The latter concern is also gaining support. Evidence grows of ‘double dip’ pricing for the same journal article; of APC payments hidden behind paywalls; of breaches of rights through the central Rightslink service; and of mislabelled open access licences. Several leading journal publishers are accused of such activities. The groundswell of concern about the dysfunctional nature of scientific journal publishing has reached significant proportions. A study entitled ‘Access to scientific content: Gaps and Barriers’ was conducted by CIBER, on behalf of the Research Information Network (RIN) among others, and published in 2012 (Rowlands & Nicholas, 2011). Nearly half the 2,600 respondents to the survey registered that they had faced difficulty in accessing the full text of journal articles on ten or more occasions in the previous twelve months.
Despite this, only 5.4% of respondents in universities felt that access to journal articles was ‘poor’ or ‘very poor’, though this rose to 19.8% for knowledge workers in SMEs and 22.9% for researchers employed in manufacturing. This differential is significant and reflects the extent of the problems the knowledge worker sector has in gaining access to scientific literature. The figure shows, for example, that conference papers are important but not easy to access, whereas clinical guidelines are less important but relatively easy to access. Faced with barriers to access, the frequent response was “simply to give up and find something else,” which does not augur well for efficiency and productivity. The study also pointed out that:

There are around 1.8 million professional knowledge workers in the UK, many working in R&D intensive occupations (such as software development, civil engineering, consultancy) and in small firms, who are currently outside of the subscription arrangements. The needs of this sector of the economy demand greater policy attention. (Rowlands & Nicholas, 2011)
[Figure: scatter plot of document types by perceived importance (scale from 1 = not at all important to 7 = extremely important) against the percentage of respondents finding them easy or fairly easy to access (0–100%). Research papers and review articles score highest on both importance (6.9 and 6.5) and ease of access (around 93%); conference papers are rated important (5.5) but only 57.7% find them easy to access; clinical guidelines are rated less important (3.4) but relatively easy to access (78.4%). Other document types plotted include patents, market research, books and monographs, technical reports, PhD theses, legal information, standards, trade publications, technical information, reference works, training materials, research data and archival records.]

Source: Rowlands et al., Access to scientific content: Gaps and Barriers. CIBER, 2012

Fig. 7.5: Gaps and Barriers to STM access (2012)
Valley of Death

Meanwhile, publishers face a further challenge, and an analogy can be drawn to illustrate the dilemma. The traditional model of scientific publishing is characterised by the dominant use of print technology going back several centuries, but this is being eroded by new ways of communicating research output (see chapter 17 on Future Communication Trends). In the analogy, print-based publishing is on the downward slope of a valley. The Internet and digital publishing, on the other hand, have created totally new ways of disseminating information, and these processes are increasing; this is reflected in the upward slope of the valley. At the bottom of the valley, the ‘valley of death’, is where the two cultures collide. The challenge facing publishers is to take the best and most durable from the print culture, the down-slope, and mix it with the best of the up-slope technologies in a way which gives them a viable and sustainable longer-term business strategy. This is implicit in retaining investors’ longer-term support for the industry – if the valley of death is not traversed in a logical and commercially viable way, then investment funds will be diverted elsewhere. Publishers are on their own in making this transition. They cannot count on the support of other stakeholders in the scientific communication process, including many authors, users, librarians, funders and policy makers: in many ways publishers have abused their commercial power and driven short-term gains higher at the expense of forging partnerships with other established intermediaries to meet future challenges.

[Figure: two slopes meeting at a trough labelled ‘Valley of Death’ and ‘Cultural Shift’ – the down-slope represents the decline in print, the up-slope the growth in digitisation, with print on one side and digital on the other.]
Fig. 7.6: Valley of Death
Mutual Mistrust

The frustration gap, serials crisis, and price gouging are not the only stress points in publisher–library relationships in recent years. An atmosphere of conflict and confrontation has existed for many years, and was made worse more recently by bitter wrangling over issues such as:
– Document delivery, particularly in the 1980s and early 1990s
– Interlibrary loans, particularly in the 2000s
– Open access as a business model (ongoing)
– Support for institutional repositories (IRs) (just beginning)
– Derivative works as an acceptable research procedure by third parties (under discussion)
– Double dipping (getting paid to make articles freely available, but also charging for them under subscription)
– Orphan works and grey literature ownership
– Text and data mining – a particular bone of contention in 2012–14
– The role of dataset collections and their curation.
Surrounding all the above is a shroud of concern over copyright infringement and intellectual property rights. The issue here is what rights authors retain in distributing their own publications to others; it is mirrored in what librarians feel able to do with the publications they have purchased or licensed. Publishers demand rights over the published works to compensate for the costs incurred in making the raw manuscript suitable for global distribution. This is an ongoing point of conflict within the industry. It has furthermore spilled over into questions about the validity of legally sanctioned activities (in some countries) such as e-legal deposit. This raises questions about the provenance of agencies able and willing to provide a curated digital record of scientific research output for posterity. Publishers are not seen as being able to do this.
The mistrust between the leading stakeholders also threatens to undermine efforts – which have become more evident in recent decades – to apply new technology and new business models, to adopt new paradigms, and generally to ensure that the scientific communication process in an electronic/digital environment is made more efficient and effective. The structure is in a state of flux and change. The latest significant element creating tensions is the rise of digital publishing systems. It has led to new services being introduced by new players to meet the emerging information needs of society in general for scientific research material. Not all the new services have
been successful. It is a feature of this industry – unique within publishing in many respects – that innovations have been slow to be adopted. In part this is because of the mismatch between what the new information technology is capable of delivering and publishers’ understanding of market needs and the conditions of users. There has been little commitment to using sophisticated market research techniques to reach an understanding of the changing needs, cultures and sociology of particular sectors within scientific publishing. New services have, by and large, been launched on a wing and a prayer rather than as a result of the systematic collection of quantified market evidence. Much initial ‘hype’ has accompanied the launch of services which, on paper, might appear laudable and viable but in practice fail to push the right buttons to become successful. Human and behavioural motivations, tied in with estimates of market sizes and numbers of practitioners/users in relevant areas, have been assumed rather than evaluated. Professional approaches to market research and product investigation have, with few exceptions, been missing. A U.S.-based research consultancy, Gartner, has addressed the various phases which a new service goes through before it is adopted; the model developed to illustrate these phases is the Gartner Hype Cycle (Gartner, 2014). Before looking at the phases of the hype cycle, however, it is better to complete the analysis of the problems and challenges facing publishers and librarians in meeting the market’s demand for an effective distribution system for research output.
Excess of Information

A crucial barrier facing knowledge workers is that, for all the benefits conferred by information technology and the communications revolution, it has a darker side: ‘information overload’ and its close cousin ‘attention fragmentation’. As the volume of knowledge grows, research effort tends to become more specialised. As a result, new disciplines and sub-disciplines evolve and spawn new journals and conferences, further widening the gap between what is needed and what can be afforded (see the twigging phenomenon in chapter 5). More recently, information also comes in a growing variety of new platforms, forms and formats, many of them digital, each demanding attention. It is the growth of both content and format which together creates information overload – content overload has been made worse by channel overload. Whereas in the pre-digital era the physical library was the first and possibly only port of call for accessing research information, today’s information
environment is characterised by complexity – library systems, search engines, publisher and third-party platforms, plus a plethora of new social and other media. Access is required not only to the published full text but also to associated datasets, software, lab notes, video links with text, etc., which means that the landscape is rich and varied, claiming greater attention from the fixed but limited time window available to researchers. A particularly pointed analysis of the impact of information overload on the capacity of the individual to cope has been made by Nicholas Carr in his book “The Shallows” (Carr, 2010). In it he builds on his earlier suggestion that Google is making us stupid. By taking away the ability to think deeply – as in reading a book – the new search engines and new media change the synapses in one’s brain to the extent that attention span and memory suffer. Psychology may come to the rescue of the individual researcher as he/she becomes inundated by print and, increasingly, digital information sources. The mind adapts to the challenge by switching off certain links within the brain as they become underused, and new channels open up as the researcher adapts to the overload. It means there could be significant behavioural patterns emerging to enable people to cope – patterns and habits which did not exist in an era when the printed article and journal subscription reigned supreme. For centuries scholars have grappled with the challenge created by an excess of published information over the capacity of individuals to read and absorb the relevant parts. It is not a new phenomenon, though the extent of it has become pronounced. Individuals have come to terms with such overload in many ways, ranging from proactive attempts to select published items of relevance prior to receipt, through to switching off and ignoring the formal publishing process entirely.
In many respects it is the latter process which has dominated the wider knowledge worker audience. They have switched off for a variety of reasons, but primarily because the scientific publishing system has not been suited to their practical needs. Other ways of tackling ‘information overload’ have included the following.
Adaptation through Multi-tasking

One suggestion is that the information overload issue could be overcome as and when the new generation of digital users adopts multitasking as a way to gain control over the flood of information. As reported in a McKinsey Quarterly article (Dean & Webb, 2011), the assumption that multitasking can overcome some of the difficulties created by information
overload is misleading. Multitasking, it is claimed, makes human beings less productive, less creative and less able to make good decisions. The brain, it is suggested, is best designed to focus on one task at a time; by allowing oneself to be distracted by the range of emails, bulletin boards, RSS feeds and other intrusions, multitasking becomes procrastination in disguise. Though humans can become quite addicted to multitasking, researchers have found that in laboratory settings multitaskers show higher levels of stress hormones (Shellenbarger, 2003). However, Don Tapscott is more positive about how the Net Generation will cope with information overload through multitasking (Tapscott, 2008). Observing how his children are able to perform a variety of information tasks simultaneously suggested to him that the new generation, growing up on a diet of online systems and interactive games, has made cognitive adaptations.
Adaptation through Filtering

The solution in the corporate world, as advocated in the McKinsey report (Dean & Webb, 2011), is to incorporate a combination of ‘focusing, filtering and forgetting.’ A good filtering strategy is critical, one which makes use of a day-to-day information support structure. It also requires some quiet space which enables the individual to contemplate – and not be overrun by the information overload problem. This is the basis on which the current scientific publishing structure rests: that only it, with its in-built refereeing services, is able to provide the effective selection, filtering and quality control which users of all types need in order to come to terms with information overload. Whether this filtering process meets the conditions required in an Internet environment is, however, a moot point. The key issue remains that information overload creates a barrier, and needs to be addressed if the goal of embracing a wider audience of UKWs is to be achieved.
Adaptation with Technical Aids

By the same token, there are also technical aids coming along. The complication is that although we have ever faster computers, they have not been designed to process and understand information which is mostly in human-consumable formats. This means we are not getting the full value out of our scientific, medical, and technical endeavours. Services such as Watson – a computing exemplar – have been introduced to help resolve this. Wolfram
Research has also created several actionable data technologies, notably WolframAlpha.com and the Computable Document Format. The new focus is on enhancements that will greatly extend the ability to make all sorts of data files readily actionable with minimal work from data producers and curators. Cognitive computing refers to the development of computer systems modelled on the human brain. Originally referred to as artificial intelligence, researchers began to use the term cognitive computing in the 1990s to indicate that the science was aimed at teaching computers to think like a human mind rather than at developing an artificial system. Cognitive computing integrates technology and biology in an attempt to re-engineer the brain, and more generally it is making inroads into the computer analysis of information outputs.
Some Publishing Myths

Publishing is also facing a number of operational challenges which could impact on how the industry is assessed or valued by the financial/investment sector. There are some disturbing clouds on the horizon, driven by a combination of external forces. All are coalescing in taking to task some of the myths and assumptions which have held STM publishing in high esteem among industry watchers in recent years.

‘Content is King.’ An assertion which some publishers have made is that ‘Content is King.’ As will be pointed out later, this is being questioned as Context, a much broader concept, takes over. Ownership of the published report no longer confers on the publisher a substantial asset which can be exploited ad infinitum. There are other supplementary forms and formats which are coming to the fore (see below).

Books and Journals will Survive. Packaged books and journals have each served the researcher well in a print-based environment, both as author and as reader. However, as the Internet takes hold, there are wider issues to consider. Users seek personalised information delivery in formats which no longer bear any resemblance to the journal or book. Discourse is increasingly being conducted through email, Skype, teleconferencing, blogs, wikis, webinars, and at conferences, actual and virtual. Raw data collected in previous and related experiments are being sought out as primary sources of research information.

New Alternative Formats will Never Catch On. There is evidence that many specialised areas of science have already migrated away from a static, text-based information system to a rich, dynamic network of new information formats which includes datasets, text and data mining, mashups, personalisation of information
and informal exchange through moderated lists. All expand the channels as well as enabling more content to be made available.

The Library is a Permanent Edifice. Another assumption is that the physical structure of the library will remain intact. In a digital world the need for a bricks-and-mortar building to store and curate large amounts of printed material becomes redundant. Information dissemination is increasingly virtual, and researchers in particular are – anecdotally – no longer willing to trek to a remotely located building to meet their information needs. In some corporate R&D centres, the library is being outsourced to organisations such as the British Library. In other areas, the library is adding on services such as coffee bars, resource centres and seminar rooms to entice researchers and academics within its portals. The old library is changing as new roles and functions are added.

Copyright Remains Unchangeable. A further assumption is that copyright will remain a protective device for publishers, ensuring that the material they publish is theirs in perpetuity. Besides the issues of archiving and preservation, which are themselves not insignificant challenges, there are rumblings that the results of publicly-funded research may become part of the ‘commons.’ Services such as Creative Commons and Science Commons are doing much to transfer ownership of published results back to the author and/or their funding organisation. This takes away some of the power base of publishers, who are now being given a ‘licence to publish’ rather than ownership over the final publication.

Peer Review and Refereeing is Inviolate. The backbone of the STM publishing system is also under scrutiny as the ‘wisdom of crowds’ comes up against the few (in some cases overworked) experts who provide refereeing as part of their support for the scientific ethic (Ware, 2009).
There is a growing clash between the ‘dynamic or live document’, which supports a real-time and interactive approach to scientific communication, and the ‘minutes of science’, which provide the building blocks on which science has been based.
The Assault on the Publishing Industry

There is a combined onslaught on the foundations of STM publishing as we now know it, from technological developments, sociological trends and administrative changes. To reiterate what was said at the beginning of this book – a ‘perfect storm’ is building up, arriving from a number of directions. Why, in some quarters, is the publishing system being hailed as inefficient and inequitable? The advantages of moving away from the
old publishing system to one heavily ingrained in the Internet and digital publishing are powerful for all except the STM journal publishers and research libraries. Yet the traditional journal publishing model remains the main information delivery vehicle. There may be an answer in the writings of the economics laureate Daniel Kahneman, who suggests that there are other motives besides efficiency in determining action. In “Prospect Theory: An Analysis of Decision under Risk” (Kahneman & Tversky, 1979) it is suggested that ‘losers’ always fight harder than ‘winners’ to protect their interests. This means that it will always be a harder struggle to effect change than to preserve the status quo. In addition, there is a strong conservative element in scientific research and publications. The various collaborative projects are often built on the framework of producing records of science in article publications. Such conservatism helps to attract contributors who are willing to use unconventional means, such as blogs, to achieve a conventional end (writing a scientific paper) more effectively. But when the goal is not simply to produce a scientific paper – as with GenBank, Wikipedia and many other such tools – there is no direct motivation for scientists to contribute. The will of the end user, the researcher, to change is also lacking. Several market research organisations have undertaken holistic studies of the size, structure and trends facing scholarly publishing. The results from one of the key information research consultancies, which has an established and recognised pedigree in this area, are given below.
Market Survey: Outsell

Outsell is a U.S.-based consultancy and market research organisation which acquired a U.K. presence through the acquisition of Electronic Publishing Services (EPS) Ltd in August 2006. Outsell specialises in analysing information industry developments, and has pioneered the quantification of many sectors of the global publishing industry. The figures it makes available to its clients have been used in this book’s assessment of the STM publishing sector’s size and market structure. The Outsell report shows the interconnection between parts of the information economy, yet also highlights its segmentation into distinct sectors, each with its own culture, tradition and financial robustness. Scientific publishing is only one small part of the overall information industry, but perhaps one of the more profitable parts. Outsell estimated the international information industry’s revenues in 2009 at US$367 billion. This was 8% less than in 2008
and is a reflection of the turmoil many sub-sectors within the information industry experienced as a result of the economic difficulties which beset the world’s financial community in 2008/9. Newspaper publishers suffered particularly badly, and print-based advertising was also badly hit. STM publishing fared better than most other sectors because it is less dependent on advertising, and public R&D investment was not switched off – it is a sector slow to respond to political and economic volatility (Outsell, 2009). As mentioned in the section on Definitions, scientific publishing includes all science, technology, engineering and medical publishing; social science and the humanities face a different set of challenges. The larger STM publishers are active in most scientific areas. However, it is also the nature of scientific publishing that there are particular areas of specialism, with some players confining their activities to a narrow range of disciplines. This is particularly true of learned society publishers. There are also strong cultural differences between disciplines, which often protect smaller publishers in particular, plying their publishing trade in unique and specialised niches. The main elements of the information industry, according to Outsell’s ‘2009 Information Industry Market Size, Share and Forecast Report,’ are shown in Tab. 7.3:
Sector                                    Estimated Revenues in …    Estimated Growth or
                                          (in $millions)             Decline over …
Professional Information Services
  Scientific, Technical & Medical         $…                         +…%
  Search, Aggregation, Syndication        $…                         +…%
  Legal, Tax & Regulatory                 $…                         +…%
  Education & Training                    $…                         +…%
Trade Information Services
  B2B Trade Publishing                    $…                         –…%
  Yellow Pages Directories                $…                         –…%
  Company Information                     $…                         –…%
  Credit & Financial Information          $…                         –…%
  HR Information                          $…                         –…%
  Market Research Reports                 $…                         –…%
  IT and Telecoms Research                $…                         –…%
Newspapers & News Providers               $…                         –…%
Total Global Information Industry         $…                         –…%
Source: Outsell’s Publishers and Information Providers Database (a private subscription service)
The above table shows that the Scientific, Technical and Medical Information sector represents a small share – approximately 6.5% – of the global information industry. Although each of the sectoral figures above suffers from the ‘long tail’ problem – uncounted revenues from the many small information providers within each category – the net effect is nonetheless that STM is a minor element within the global information industry. Overall, Outsell believed that conditions facing the STM publishing sector were such that the growth of 1.2% (2008 to 2009) would accelerate to 2.3% in 2010 (actual growth was 4.3%), then 3.5% in 2011, with 6.4% growth achieved in 2012. This is half the growth rate achieved in 2007 (when the sector’s annual revenues grew at 12.7%), but is nonetheless a reflection of Outsell’s optimism that conditions were sound and markets remained healthy. This study will look into the validity of these assumptions in later sections. The next chapter describes some of the comments made in the trade press, social media and formal publications by ‘experts’ – individuals selected for their awareness of aspects of the changes in communication which warrant attention. There is almost universal acceptance of growing tensions within scientific publishing – that it is becoming dysfunctional – which affects the future structure of scholarly communication. Releasing the inherent tension within the current STM publishing system could mean that knowledge workers’ needs are taken into account in emerging strategies and paradigms for scientific research outputs, even though few of the following commentators address this issue specifically. They are more concerned with the industry context.
Chapter 8 Comments on the Dysfunctionality of STM Publishing

Introduction

The past few years have seen prominent industry-watchers giving their views on the current state of STM publishing. They have mainly taken the moral high ground, looking at the political, social and economic consequences for society of the continuation of the present publication system. They have generally been critical.
Michael Nielsen

“Is scientific publishing about to be disrupted?” was a question raised by Michael Nielsen in a blog on June 29th 2009 about the future of STM publishing (Nielsen, 2009). His premise was that a number of industries have been sidelined because they were structurally unable or unwilling to cope with the new economics facing their particular sectors. He cited print newspapers, music and minicomputers as examples. The leaders of these industries were, he claims, neither stupid nor malevolent – rather, the underlying structure of their industry, primarily its scale of operations, was unsuitable for new and emerging market conditions. The traditional newspaper publishers, with their high overheads, could not compete on cost with the flourishing grass-roots Twittering and blogging masses. Nor is quality the issue, as many of the world’s most esteemed writers are part of the blogging set. The immune systems of the newspaper, music and minicomputer industries protected a standard organisational structure, and this ran counter to the openness and demands for free information which have emerged on the back of the technological revolution. Nielsen asserted that scientific publishing is about to face the same disruption. He claimed that large publishing houses will have to compete with new companies which focus on meeting specific new digital demands on the information industry. In effect, he claims, the large traditional publishers will have to traverse ‘the valley of death’ to survive. He pointed out that most senior positions in the largest scientific publishing houses are not held by technologists. Most have strong business or editorial
skills. He claimed that in ten to twenty years’ time “scientific publishers will be technology companies. Their foundation will be technological innovation and most key decision-makers will be people with deep technological expertise”. He suggests there is a flourishing ecosystem of start-ups in scientific publishing that are experimenting with new ways of communicating research, radically different in approach from journals. They are better prepared to cope with changing techno-market conditions, and emerging democratic trends, than those publishers wedded to elitist principles. Lessons can be learned from the new giants which have emerged on the information scene and made money in what appears to be a free and open industry sector – companies such as Google, Amazon and Apple. The pattern seems to be that by reaching out to wider global communities, and by taking smaller individual payments for the services provided, the money can come flowing in. Many small payments from a much bigger audience are a healthier business proposition than reliance on a few customers who complain about extremely high subscription prices. As identified by Nielsen, the immune system of scientific communication is strong in protecting traditional publishing formats and systems. The question is whether publishers’ existing scale of operations will be sufficient to sustain them given the economic, financial, social and technological challenges they face.
Zoe Corbyn

This theme also featured in an article by Zoe Corbyn in The Times Higher Education Supplement (13 August 2009) which described some of the challenges facing scientific publishing. She quoted a researcher from Cambridge who suggested that “the hegemony of the big journals has enormous effects on the kind of science people do, the way they present it, and who gets funding”. She also claims that this is unhealthy for science. Corbyn referred to the need for authors to get published in the most highly cited journals and, if this is not possible, eventually to find a lesser journal in which to publish their findings. This raises the issue of metrics in determining which journal has the higher citation rate or perceived quality. This emphasis on metrics has become important – according to Nobel prize winner Sir John Sulston from Manchester University, “(Journal metrics) are the disease of our times”, while Richard Horton, editor of The Lancet, claimed that journal metrics are ‘divisive’ and argued that it was outrageous that they should figure at all in the next Research Excellence Framework (REF) in the U.K. Peter Murray-Rust, a Reader at Cambridge University, has acknowledged that metrics are part of the journal system but claimed the real problem is that
they are produced by the private sector (Thomson Reuters). His suggestion was for higher education itself to control the use of journal metrics if it wants to determine its own destiny. There was a reaction on the Listservs to the various points made in the Times Higher Education Supplement article from Phil Davis, formerly at Cornell University and more recently an independent consultant. In particular he claimed that most people do not understand what citation metrics mean and that there is a requirement to protect the masses (of scientists) from their own naiveté. There is an assumed lack of alternatives to citation counting and ranking. “What is missing from all of the rants is why citation metrics and associated journal prestige are so appealing to readers, authors, editors, publishers, academic review boards, and grant allocating agencies, and why – in spite of their known limitations – do we still use them. Blaming the system is not a good enough explanation”. He suggested that while we may fear that others do not understand the limitations of citation metrics, he has not met one librarian, publisher, or granting officer who claims to make decisions solely on the basis of these metrics. This may or may not be true in the round. Corbyn suggested that learned societies should take on stewardship of the scientific communication system, although it was recognised that the distinction between commercial and learned society publishers is becoming increasingly blurred. Corbyn went on to highlight the problems of copyright, and in particular the barrier this puts in the way of text and data mining. Though she claims some publishers are allowing flexibility in this area, it is by no means universal.
The Wellcome Trust, funder of much global biomedical research, was attempting to break down the barriers which publishers’ policies had put in place limiting text and data mining of published results, but had faced strong resistance from publishers who feared that ‘derivative works’ would create a forum over which they had no control or from which they received no financial returns.
George Monbiot

A powerful indictment of scientific publishing was made a couple of years later by a writer for The Guardian newspaper. According to George Monbiot, in an article in The Guardian on 29th August 2011 (Monbiot, 2011), it is not possible to recognise the picture of flexible, rapidly reactive large commercial publishers rushing to embrace the new millennium. There has been, according to Monbiot, a lack of leadership from global publishers in switching from the traditional subscription model to new, untested ones. This is because the commercial risks
involved are unknown and unpalatable. Why throw away a regular and stable gross margin of almost 40% on a serial subscription service in favour of something a lot less? As suggested earlier, their shareholders would be up in arms. This has led many to see the scientific publishing industry as greedy and unresponsive to developing market needs. According to Monbiot, “who are the most ruthless capitalists in the western world? Whose monopolistic practices make Walmart look like a corner shop and Rupert Murdoch a socialist?” His vote goes not to the banks, the oil companies or the health insurers, but instead to scientific publishers. “Of all corporate scams, the racket they run is most urgently in need of referral to the competition authorities”. “Everyone claims to agree that people should be encouraged to understand science and other academic research. Without current knowledge, we cannot make coherent democratic decisions”. But according to Monbiot, “the publishers have slapped a padlock and a ‘keep out’ sign on the gates”. This is at the heart of the present study – the need to empower a wider community to make effective decisions based on information and knowledge, and not to see such a resource locked away behind tariff walls. He goes on: “One might resent Murdoch’s paywall policy, in which he charges £1 for 24 hours of access to The Times and The Sunday Times. But at least in that period you can read and download as many articles as you like. Reading a single article published by one of Elsevier’s journals costs $31.50. Springer charges €34.95, Wiley-Blackwell, $42. Read ten such articles and you pay 10 times. And the journals (publishers) retain perpetual copyright. If the researcher wants to look at a printed letter from 1981 that can cost a further $31.50”. He agreed that it is possible to go to the local research library, but libraries too have been hit by budgetary constraints.
“The average cost of an annual subscription to a chemistry journal is $3,792. Some journals cost $10,000 a year or more to stock. The most expensive primary research journal is Elsevier’s Biochimica et Biophysica Acta, at $20,930.” Though academic libraries have been frantically cutting subscriptions to make ends meet, journals now consume 65% of their collections budgets, which means they have had to reduce the number of books they buy, while budgetary pressure is also being exerted on staffing and facilities, including storage. Journal subscriptions account for a significant component of universities’ costs. In addition, not everyone is able to make use of the local research library. Unless one is affiliated with the library in a recognisable way – as a student or member of staff – the terms of the licensing agreement between publisher and research library are such that one would be turned away if seeking access to electronic journals. Monbiot laments the fact that academic publishers get their articles, their peer reviewing (vetting by other researchers) and even much of their editing done for free. He also states that the material they publish was commissioned
and funded not by them but by the tax-paying public, through government research grants and academic stipends. But to see it, the interested general public and much of academia must pay for it again. More importantly, universities are locked into buying publishers’ products. Academic papers are published in only one place, and they have to be read by researchers trying to keep up with their subject. Demand is inelastic and competition non-existent, because different journals cannot publish the same material. In many cases the publishers provide inducements for libraries to buy a large package of journals, whether or not they want them all (‘Big Deals’). Publishers claim that they have to make these charges because of the costs of production and distribution, and that they add value because they “develop journal brands and maintain and improve the digital infrastructure which has revolutionised scientific communication in the past 15 years”. But an analysis by Deutsche Bank reached different conclusions: “We believe the publisher adds relatively little value to the publishing process … if the process really was as complex, costly and value-added as the publishers protest that it is, 40% margins wouldn’t be available” (Monbiot, 2011). Far from assisting the dissemination of research, the big publishers impede it, as their long turnaround times can delay the release of findings by a year or more. “What we see here is pure rentier capitalism” – monopolising a public resource and then charging exorbitant fees to use it. Another term for it is economic parasitism, claims Monbiot. “To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning”. However bad the situation is for academics and researchers, it is far worse for the laity. Independent researchers who try to inform themselves about important scientific issues have to fork out thousands of pounds sterling.
It appears to contravene the Universal Declaration of Human Rights, which says that “everyone has the right freely to … share in scientific advancement and its benefits”. This is an important mantra around which this particular book is built. Empowering the many with the results of society’s scientific progress (rather than keeping them locked away for the wealthier institutions) offers an attractive social scenario. Open Access publishing, despite its promise, and resources such as the Public Library of Science and the physics database arXiv.org (see chapter 25 on Open Access), have failed to displace the monopolists. The reason is that the big publishers have rounded up the journals with the highest academic impact factors, in which publication is essential for researchers trying to secure grants and advance their careers (see above). You can start reading Open Access journals, but you can’t stop reading the closed ones. It was inevitable that this criticism of the scientific publishing industry would be challenged. In one of the leading publishing industry blogs, The Scholarly
Kitchen, on 1 September 2011, Kent Anderson in the U.S. claimed that the arguments put forward by Monbiot were ‘uninformed, unhinged and unfair – the Monbiot rant’. They represented nothing more than a ‘rant’ against the existing publishing system (Anderson, 2011). Others who are closer to the publishing industry feel that Monbiot has a jaundiced view of the commercial STM publishers and is too extreme in his arguments. Nevertheless, there are many who agree with Monbiot’s case.
Sir William Timothy Gowers

One expert who agreed with the main thrust of Monbiot’s arguments is the eminent Cambridge professor, Sir Tim Gowers. He saw Elsevier as the arch-villain in draining profits from the science budget into the hands of financiers. Its 38% margin on its operations makes it a star performer, equivalent in some eyes to successful IT companies such as Apple and Google. On 21 January 2012 Gowers penned a blog which reignited a campaign for authors and readers to boycott Elsevier publications. Such a campaign by the research community was not new. There had been an earlier outcry, led by the equally eminent Michael Eisen in the U.S.A., which led to the formation of the Public Library of Science (PLoS), a key publisher of Open Access-only journals; some 34,000 signatories were collected in Eisen’s campaign in the U.S. Within weeks the U.K.-based Gowers had 9,000 scientists around the world signed up to a petition pledging to refrain from editing, publishing or sponsoring articles in any of Elsevier’s more than 2,000 journal titles. The stimulus for the campaign from this ‘thoughtful academic’ (The Sunday Times, 19 February 2012) came partly from the extraordinarily high profits of the largest player in the scientific publishing world, and partly from the effects which the economic downturn was having on science budgets, including libraries. Gowers claimed that publishers such as Elsevier were ruthless in cutting off journal supplies to the captive market they serve – research libraries. In particular there were barriers preventing attempts to negotiate better deals on the package of journals within their portfolio of ‘Big Deals’. These included preventing librarians from discussing and comparing the terms each library had negotiated with the publisher, under pain of legal sanctions.
According to Gowers, the Internet is potentially undermining the stranglehold which journal publishing has had, in that new forms of communication are being opened up, relegating the published journal article to the status of a version of record (VoR). “Interesting research gets disseminated long before it
gets published in official journals so the only real function that journals are performing is the validation of papers”. Given that published articles are no longer a communicator of the progress of science, it seemed to Gowers and his colleagues a travesty that Elsevier should have earned £768 million for its private investors in 2011 from its involvement in the public scientific arena. Whilst authors remained distanced from the commercial activities of the publishing giants, and readers were separated from the purchasing decision by the research library and its collection development practices, the status quo would be maintained. Gowers’ call to action was an attempt to highlight the dysfunctionality of the industry, in which the freedom and openness of science and the Internet clash with the profitability targets set by the owners of publishing conglomerates. In a blog dated April 2014, in which he again challenged the inexorable rise in Elsevier profits (to 39% of revenues by then), Gowers took the issue further. “I have come to the conclusion that if it is not possible to bring about a rapid change to the current system, then the next best thing to do, which has the advantage of being a lot easier, is to obtain as much information as possible about it.” The problem he faced was that publishers such as Elsevier required non-disclosure by librarians of the ‘preferential deals’ they were being given as part of signing up for licences to ‘Big Deals’. Because of this lack of pricing transparency, no-one knows how much money is being drained from academic university budgets (whether from research grants, indirect money received through HEFCE grants, or student tuition fees) into the financial coffers of for-profit publishers. Gowers demanded more openness on pricing issues – information which is being held behind closed doors within university libraries and Jisc (Gowers, 2014).
For example, Elsevier alone was charging £14.4 million to 19 universities in the U.K., as well as gaining millions more from the other 100 or so universities in the country. Publishers are also gaining millions of pounds in APCs (article processing charges, part of the Open Access business model). There are also many other traditional publishers to whom academic libraries pay subscriptions. None of their pricing, subscription and circulation data is out there and subject to public scrutiny. As part of his research into Elsevier pricing, Gowers used the Freedom of Information Act to find out just how much was being spent by the key U.K. Russell Group university libraries on Elsevier journals (including their print, ScienceDirect and Freedom Collections). Non-disclosure clauses included by Elsevier within the contracts have prevented libraries from releasing this data, and even from discussing the figures with other libraries or academics within their own university. Though a high response rate was achieved, many Freedom
of Information officers also used the argument that disclosure of the amounts paid to Elsevier threatened Elsevier’s commercial interests, a point which was not accepted by Gowers (Gowers, 2014). He invited mathematicians from Cambridge University to give their views on the importance or otherwise of continued access to Elsevier journals for their research. He concluded that “most people would not be inconvenienced if they had to do without Elsevier’s products and services, and a large majority were willing to risk doing without them if that would strengthen the bargaining position of those who negotiate with Elsevier.” He also pointed to other research in mathematics which showed that “a large proportion of articles in various different journals, not all of them Elsevier journals, are freely available in preprint form”. According to a follow-up press release from Research Councils U.K. (RCUK), “Markets work best where there is pricing transparency and price non-disclosure clauses in contracts never work to the benefit of customers. RCUK has long argued against such clauses in journal contracts and we are pleased that all major publishers, with the exception of Elsevier, have abandoned them” (24 April 2014). RCUK stated further: “We believe that there is no justification for continuing to base prices on an outmoded delivery mechanism. Prices should be more closely related to production costs, costs that in an online environment are falling. However, rather than see falling prices, libraries have faced inflation-busting price rises over the past 20 years, while large commercial publishers post record profits”. Although complaining about the actions of the market leader is not in itself an indicator of the dysfunctionality of the publishing system as a whole, it does suggest that there may be a better way – one which would enable all those who stand to benefit from access to research results to be included in a different business paradigm.
Andrew Brown

In the February 5th, 2012 issue of The Guardian, Andrew Brown offered another critique of the STM publishing system. He too claimed that scientific journals are a notorious racket: because they are essential tools for the professions that use them, publishers can charge pretty much what they like. University libraries, and even others with any pretence at scholarship, now spend fortunes on learned journals. Elsevier published 1,749 journals in 2011 at an average annual subscription price of nearly £2,400, and each one is indispensable to specialists.
Andrew Brown pointed out that the government pays universities to conduct research for the public benefit. The authors of the research results are paid nothing. The measure of this activity is publication in peer-reviewed specialist journals; the peer review is done for free, by academics employed and paid for by universities. The results are then sold back to the universities which nurtured the research in the first place. “This is bad value for governments. It’s also extremely bad for anyone outside a university who may want to learn, and that’s a situation the web has made more tantalising”. Almost all journals are indexed, and references to them can be found on Google Scholar, PubMed Central and other leading comprehensive online data sources. “So the truth is out there. But it will cost you” to get access to the full report. He claimed that the big names in general science publishing – Nature and Science – are slightly less rapacious: one can get a day pass to read a Science article for $10. However, that is still a lot to pay for something you are meant to read within only 24 hours, while an annual subscription to either magazine costs around $200 and the access it gives to archives soon expires. One answer to all this, he claimed, is to promote the growth of free scientific publishing and, increasingly, of free access to the immense quantities of data that lie behind most published papers. For those who just want to know what is going on, he argued, Open Access is unsatisfactory for two reasons. The first is that it is not yet widespread enough: there is no guarantee that the interesting work will emerge in Open Access journals, which tend to be extremely specialised. The second is more philosophical, and deeper. Open Access may never become as useful a resource as the paid-for generalised magazines (such as Science and Nature), precisely because Open Access journals are written to be enjoyed within particular fields of expertise.
Nature and Science are not like that. Within any given issue, perhaps three-quarters of the contents are impenetrable to any given reader. The fact that a paper has been printed at all in a prestigious journal tells you that experts think it is important, and there is often a piece at the front of the magazine explaining in intelligent layman’s terms exactly what the paper says and why it matters. Brown acknowledged that these things are worth paying for. But how are individuals, normal people, to pay? How could unaffiliated knowledge workers benefit?
Richard Horton/Richard Smith

Another criticism levelled against publishers is that they restrict the ability of authors to deal directly with their peers, the public and the media in disclosing
results once their paper has entered the publication process. Both Richard Smith (formerly editor of the BMJ) and Richard Horton (The Lancet) question whether journals truly add much to the scientific enterprise, despite the four functions of journals espoused by Henry Oldenburg being as important now as they were in the seventeenth century. Richard Smith was quoted as saying “Journals are getting rich off the back of science without, I would argue, adding much value”, and Horton went further by saying that if the situation continues “journals will die, and deserve to.” These assessments were made by two people who have been very close to the traditional journal publishing system. To survive, Horton claimed, publishers must repurpose themselves to set a strong public agenda around what matters in science, and communicate their concerns back to the scientific community – “right now they are doing very badly at both” he argued. Smith stressed that publishers should embrace the Open Access business model in favour of the subscription model. Failing this, publishers might well see the emergence of what was described by Michael Nielsen (Nielsen, 2009) – scientists becoming more active in self-publishing on the web and creating their own blogs. How these would be organised into an architecture serving the interests of authors and readers is as yet unclear, but there are a few indications that this may be a direction which scientific communication takes in future.
Daniel Allington

Daniel Allington, Professor of Sociology at the Open University, wrote a thoughtful piece about the role of Open Access in the current scholarly communication process (Allington, 2013), in which he confirmed that he had changed his view about the impact which Open Access schemes are having. Though he initially felt that Green Open Access was an improvement over the traditional toll-based publication system, and made reference to the complaints made by George Monbiot about its dysfunctionality, his opinion was changing in line with the new thought that the wrong questions were being addressed. He felt that Open Access was being proposed as a solution to a range of problems which had little to do with one another. He pointed out that the then Minister for Universities and Science in the U.K. (David Willetts) was also concerned that one consequence of the traditional system was that it perpetuates an arrangement whereby the fruits of research are not disseminated to a wider public, despite the funding of research largely coming from the public purse.
Support for the Green Open Access movement ignores the fact that there is an inherent need to fund institutional repositories, which means that the financial pain is merely switched from one institutional account to another. As a representative from Elsevier (Dr Alicia Wise) pointed out in her presentation to a Select Committee enquiry, the overall costs of scientific publication would remain the same for both traditional and Open Access publishing as long as there is a need for quality control over what is published. Allington makes reference to an alternative Open Access system which would be an improvement over the current one if the main goal is to improve the reach of published material: an insistence by funding agencies that recipients of research grants produce ‘non-technical summaries’ of their research, to be made freely available on the funding agencies’ web sites. The fact that this does exist in many instances, and is not heavily used, is perhaps a failure in the marketing of the system rather than any weakness in the service concept. It is one which has been proposed by Joseph Esposito as part of a new tertiary publication service from the publishing industry (see the Nautilus concept in chapter 17, Future Communication Trends). Alice Bell, a researcher in science and technology policy, made a similar point in the Times Higher Education Supplement, arguing that Open Access may lead to clearer write-ups of results if there is the impression that a wider audience may read the article (Bell, 2012). However, Allington was not convinced, claiming that “To translate a research article from a technical register into everyday English would … make it more ambiguous or more verbose”. In either case it would be worse from the perspective of the primary target audience of knowledgeable experts in the field.
“Open Access is one thing; expecting researchers qua researchers to take up the task of public education by radically changing the manner in which they communicate among themselves is quite another”. Writing popular science articles is vastly different from reporting on a highly technical research project. Allington’s blog also highlighted the vast differences in journal costs between disciplines, ranging from the highs seen in physics to the lows (almost Open Access) in the humanities, so the problems of scholarly communication should not be generalised too much. It should also be recognised that the problem is as much a feature of the under-funding of libraries as of the high prices set by scientific journal publishers. On the central issue of whether expanding the reach of research output is achievable through Open Access, Allington appeared not to be entirely convinced. He saw the Green Open Access movement as a positive contribution to the system in that it enables the results of research effort to be shared among
other academics "and not on the grounds that the public is crying out for access to their research publications". In that respect he may have underestimated the changes taking place which are democratising the market for scientific output, particularly among the growing citizen science communities. He saw the real value of the mandated Green Open Access publishing system as a means for an author to keep track of who wants access to his or her research. In the traditional publication system, requests for (free) printed reprints sent to the author met this need; now online requests for the version held in the local institutional repository meet the same requirement. The real challenge facing the Open Access movement is that a substantial number of academics cannot be bothered even to look for free copies online. But is this a major barrier, or one which can be addressed by improved search and delivery systems in future? The openness, transparency, interactivity and cooperation which are values offered by the web and the Internet could provide the mechanisms which oil the delivery of relevant information, on time, to the relevant target audiences. Change is becoming a powerful force in creating innovative systems to meet a latent demand. As observed by Casey Brienza (2011: 168), "the number of people who might learn from research results is always going to be greater than the number likely to actually seek out what has been written up". But whether the existing stakeholders in the system are the most appropriate to carry the banner of social communication into the future may be questionable. According to the librarian Rick Anderson, "each participant in the system receives distorted and radically incomplete market responses to its inputs". For example, there is virtually no competitive pressure on publishers to control journal prices, and librarians' collection development decisions are not closely related to actual user needs.
Allington said that "… the current system is flawed not because journals are over-priced but because … we do not know what the price ought to be". Neither was Allington convinced that social media would provide an adequate surrogate for the journal. Relying on Twitter and the like to provide a secure and reliable platform for scientific communication works against a culture which has been inbred in the research community for centuries. Researchers voluntarily give their time for free to review and referee articles – it is part of their DNA to do so. This is the key to understanding why scholarly communication has been so conservative in its adoption of social media. Meanwhile the debate rages over what may be considered inconsequential arguments about where the funding flows for supporting scientific publishing should come from – Green, Gold or Hybrid Open Access, or 'Big Deals.' Official U.K. policy (and to some extent that of the Netherlands) seems to be in support of Gold Open Access, whereas the rest of the world seems to be favouring Green. The power of the publisher lobby in both of the main commercial journal publishing countries may have something to do with this split. The consequence is difficult to assess at this stage. What has happened is that the elitism inherent within the traditional scholarly communication system seems to be reinforced by the Gold Open Access movement, whereas the Green Open Access movement pioneers more innovative, less restrictive approaches.
Peter Murray-Rust

In a blog post dated February 2014, Dr Peter Murray-Rust, a Reader in Chemistry at the University of Cambridge, took issue with the mission which publishers follow, and in particular their imposition of barriers to accessing published articles for content mining. He commented:

Scholarly publishing industry is almost unique in that it provides an essential service on an unregulated monopoly basis. In other words the industry can do what it likes (within the law) and largely get away with. The "customers" are the University libraries who seem only to care about price and not what the service actually is. As long as they can "buy" journals they largely don't seem to care about the conditions of use (and in particular the right to carry out content mining). In many ways they act as internal delivery agents and first-line policing (on copyright) for the publishers. This means that the readers (both generally and with institutional subscription) have no formal voice. Railways have to submit to scrutiny and have passenger liaison committees. So do energy providers. Ultimately they are answerable to governments as well as their shareholders. Publishers have no such regulation and operate effective micro-monopolies. Readers have no choice what they read – there is no substitutability. They can either subscribe to read it or they are prevented by the paywalls. If they have access they can either mine it or they are subject to legal constraints (as in this case). Publishers can go a very long way in upsetting its readers without losing market share (Murray-Rust, 2014).
Bonnie Swoger

According to Bonnie Swoger, in a blog posted on Scientific American (June 18th, 2014), many signs were pointing to an academic publishing model based on the article, not the journal. This did not mean that publishers would disappear, but individual journals might not matter so much in future, according to Swoger. "Even now, articles appear on publisher websites that can make it difficult to tell which journal the article is in". The predominant branding is typically for the publisher, possibly the platform (e.g. ScienceDirect), and the article. Journal branding had arguably become a minor factor (Swoger, 2014).
Martin Fenner has made a compelling case that the future of scholarly publishing may not even lie at the article level, but rather at an even finer-grained level, where data, analysis, code and other information products live separate (but closely linked) lives. As the call for greater sharing of research data and code grows, researchers are citing data sets more often and scholars are starting to get credit for publishing high-quality data. The business models of for-profit publishers (see Chapter 24, Business Models) seem to favour increasing numbers of journals: creating a new journal is another way to ask for additional subscription fees. But these techniques may not work much longer as Open Access grows and funders require authors to make publications available. Whilst Swoger does not write an obituary for the scholarly journal quite yet, the tools currently in place mean that the journal is no longer necessary, and it may eventually be relegated to articles about the history of scholarly publishing.
Summary

Commentaries by industry watchers such as Nielsen (2009), Monbiot (2011), Gowers (2012), Allington (2013) and Murray-Rust (2014) have highlighted the weaknesses in the current STM publishing system, and several have made passing reference to the elitism which is a feature of the system. Few have focused specifically on the potential which exists to 'democratise' the STM publishing system by including UKWs as active partners. Without publishers or any other stakeholders taking on the mantle of UKW champions, it is unclear what impact the pressures described by the pundits will have on opening up the scientific publishing process. However, a critical analysis supports the contention that they would support a change which would make for a healthier, broader industry sector. The conclusion from this industry-wide assessment of the scientific publishing system is that it is not suitable for a digital age. Its dysfunctionality, combined with the drivers for change which have been recorded, suggests that other options may be preferable. Much depends on how wedded researchers of all generations and all disciplines are to existing formats and industry structures, the dysfunctional nature of the industry notwithstanding; and also on how responsive the existing dominant stakeholders are in changing their operations.
Chapter 9
The Main Stakeholders

Response Strategies

Do publishers and libraries remain relevant in their current form at a time when the Internet and electronic publishing systems offer new possibilities and alternatives, particularly as the web and Internet are used so extensively by the research community in conducting research and subsequent communications? It would seem inevitable that the research community would make use of the Internet to disseminate research results swiftly and effectively to a digitally-aware and digitally-responsive audience. New bespoke support services for publishing which are more appropriate to the digital age are emerging (such as Mendeley, ResearchGate, etc.). By-pass strategies, using social media and social networking processes to create new ways to disseminate scientific research output, are being adopted. As part of this book, some of these new ways of Internet-based information dissemination are explored as they relate to whether a wider audience can be reached. In the meantime, it is apparent that several of the larger scientific publishers – those which have the resources to be visionary and 'strategise' – have expanded into newer areas. Elsevier, for example, made two major announcements in the early days of 2013. The first was its acquisition of Knovel, a New York-based database service which integrates technical and engineering data with analytical search tools. This has given Elsevier strength as an information service supporting the engineering professions worldwide, through the provision of not only text-based publications but also data, analytical tools, protocols, etc. The second was the acquisition in April 2013 of Mendeley, a U.K.- and Germany-based web service which enables social networking among scientists.
By the same token, Wiley has also expanded its horizons – in May 2014 it completed its acquisition of CrossKnowledge, a learning solutions provider focused on leadership and managerial skills development. The above indicate that STM publishing is in a state of flux. The confluence of various changes is also having its effect on the other main STM stakeholders. Librarians are moving into new roles as packagers of published information for their local clientele, and are claiming back some publishing functions by setting up and running institutional repositories and university presses. Publishers have been trying to disintermediate libraries by using 'Big Deals' to expand the number of users able to access their published material. Librarians have seen their collection development function compromised as their choice of journals selected for purchase has diminished as a consequence. Publishers have been challenged by the emergence of new players, often from within the scientific community, with services designed to be more relevant to highly targeted audiences. Funding agencies have pursued Open Access policies to ensure that the results of research they have funded gain maximum impact. University administrators, far from becoming 'tipping point' agents, have been slow to react, either from ignorance or from a lack of awareness of how fast information needs in academia are changing. The traditional clear-cut roles of publishers, librarians and other intermediaries are under scrutiny, as are the products and services that made them important. Researchers are embracing social media to speed up communication about their research work. A leading example is Mendeley, a grass-roots research service developed by scientists which relies on community spirit to improve scientific information. However, it was acquired in April 2013 by the organisation which has most to fear from the success of such collaborative grass-roots projects – Elsevier. Whether Elsevier will stultify Mendeley's aims in the name of protecting the profitability of its journal publishing business remains to be seen. Another similar development is DeepDyve, a Silicon Valley initiative which is establishing itself as an agency through which individual articles can be bought at a low price, avoiding the need to find these articles among the multiplicity of web sites run by publishers. DeepDyve was developed by innovators and is funded by venture capitalists, not publishers.
Despite the rapid changes taking place in the social media sector, libraries and publishers are for the most part not set up to work with and on such risky innovations. Some atypical services are emerging, such as the Polymath Project, GenBank, SDSS, the Ocean Observatories Initiative, the Allen Brain Atlas, arXiv, etc. (see Chapter 22). These indicate that there are different ways of doing things – ways which do not necessarily disadvantage unaffiliated researchers, as they are services which embrace and encompass a wider user base. These developments may have the effect of bringing UKWs further into the scientific communication scene in future. A bleak assessment of the future for publishers hinges on their dependency on 'Big Deals'. As 'Big Deals' become increasingly unaffordable to librarians they will break down. However, publishers have survived for many decades, despite continuing criticism of high journal and licence prices, by squeezing out
weaknesses in the library system, such as libraries maintaining duplicate holdings. Those who believe that there remains a strong financial future for publishers – such as the analysts at Exane BNP Paribas, who monitor the financial returns of the leading commercial publishers – base their assumptions on there still being a pot of gold within library budgets which remains untapped. Weeding out print in favour of digital is one area that will release additional financial resources, some of which it is assumed will help fund the transition to the new business models that will sustain existing publishers. Another is to grow alongside the greater efforts being invested in R&D in developing countries (BRIC, VISTA, etc.). Moderated bulletin boards such as GOAL and LibLicense contain many references to how librarians are changing their roles. Some claim this involves switching from being custodians of published literature to proactive servers of their clients' information needs. As David Weinberger comments in his book 'Too Big to Know: Rethinking Knowledge':

Librarians are enmeshed in a struggle for a workable vision of a future for their institutions, not only debating the merits of new techniques for navigating collections but wondering how to weigh the expertise of the 'crowd' against those with credentials (Weinberger, 2012)
Several pundits contend that a decisive role in what happens in future will be played by the millions of 'grey' researchers outside academia, who will make their needs known and thereby change the financial models and institutional structures that support scientific information dissemination. One thing is certain: the next five years will see a significant change in the way scientific communication takes place. Part of this change could be a greater democratisation of the scientific information system, and greater access for a wider audience of knowledge workers, involving more innovative stakeholders.
Leading STM Commercial Publishers

Tab. 9.1 lists the main scientific journal publishers worldwide with an indication of their respective outputs of articles and journals (2010). It is based on the figures given in publication catalogues, pulled together as part of a Europe-wide study of the use being made of institutional repositories and Open Access (PEER, 2012).
Tab. 9.1: Significant STM publishers and their journals

Publishers covered: BMCentral, BMJ, Cambridge, EDP, Elsevier, Hindawi, IOP, PLOS, Sage, Springer, Taylor & Francis, Wiley-Blackwell. For each publisher the table gives the number of articles in its catalogue for all years, the number of articles published in the year, and the number of journals in the year. (The numeric values have not survived reproduction.)

Source: PEER Economics Research – Final Report, January 2012, ASK Research Centre, Bocconi University, Milan, Italy (PEER, 2012a).
An indication of support for the traditional scientific publishing business model can be seen in the interest shown by the investing community. In June 2013 the venture capital owners of Springer Science and Business Media announced that they had sold their ownership in the company to a German financial buyout firm, BC Partners, for approximately 3.3 billion Euros (£2.75 billion) – the largest private-equity acquisition in Germany for seven years. EQT and the Government of Singapore Investment Corporation, the previous owners of Springer, had been pursuing both a direct sale and a flotation. Springer publishes 2,200 English-language journals and more than 8,000 book titles every year. However, all this changed in January 2015 when it was announced that Springer S&BM had joined forces with Nature Publishing Group to create a new organisation comparable in size with Elsevier's science division. There is still strong commercial interest in established publishing brands. It is sufficient to point out that STM publishers overall are profitable, in some cases very profitable. This, in turn, has led to complaints that they are exploiting a public good for their own ends, and also that they are reluctant to change – that they seek to protect their existing profit margins and want the status quo to remain.
Industry Concentration

Estimates of the actual number of publishers vary, but the top ten publishers account for around a third of all journal titles, while a large number of publishers publish only one or two of the 28,000 English-language scholarly journals published globally. Jan Erik Frantsvag of the University of Tromso published an article in a 2010 edition of First Monday in which he analysed the distribution of publishers of Open Access and subscription-based journals respectively (Frantsvag, 2010), basing this on data from the Directory of Open Access Journals (DOAJ) and from Ulrich's Periodicals Directory. What is striking is that the distribution of publishers by number of titles published was similar across the two sectors – Open Access and subscription-based. From Ulrich's, he identified 8,566 publishers which publish mainly subscription-based journals, a figure more than four times the often-cited estimate of 2,000 scientific journal publishers given in, for example, 'The STM Report: An Overview of Scientific and Scholarly Journal Publishing' (Ware & Mabe, 2012). But Frantsvag found that 7,168 of those publishers (83.7%) publish only one journal, and thus account for just under a third of all titles. At the other end of the spectrum, 27 publishers (0.3%) publish more than 100 titles and account for just under 30% of all titles: a clear example of the 'core' and 'the long tail' in the supply of published STM material. For Open Access publishing, he identified 3,231 OA publishers. Of those, 2,839 (87.9%) published only one title, while three (0.1%) publish more than 100 titles – a remarkably similar structure to that of the subscription-based publishers. Frantsvag's starting point was an economic one: an OA publishing environment in which single-title publishers predominate tends to suffer from high average fixed costs, and cannot benefit from economies of scale.
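The concentration figures reported by Frantsvag can be verified with a little arithmetic. The sketch below simply recomputes the percentages from the publisher counts quoted above:

```python
# Publisher counts quoted from Frantsvag (2010): total publishers,
# those publishing a single title, and those publishing more than 100 titles.
sectors = {
    "subscription": {"total": 8566, "single_title": 7168, "over_100": 27},
    "open access":  {"total": 3231, "single_title": 2839, "over_100": 3},
}

for label, d in sectors.items():
    single_share = d["single_title"] / d["total"]
    large_share = d["over_100"] / d["total"]
    print(f"{label}: {single_share:.1%} publish one title; "
          f"{large_share:.1%} publish more than 100 titles")
# subscription: 83.7% publish one title; 0.3% publish more than 100 titles
# open access:  87.9% publish one title; 0.1% publish more than 100 titles
```

The recomputed shares match the percentages given in the text, confirming how closely the two sectors mirror one another.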
This serves only to emphasise how journal publishing is characterised by large numbers of publishers operating on a very small scale and probably at less than optimal economic efficiency.
Geographical Location

Europe and the United States have been the centres of STM publishing in the past. As a commercial enterprise, STM publishing took off after the Second World War, when scientific research effort, largely put on hold during the war, was ramped up again. There was a need for professional organisations to provide support for the
dissemination of the new research output, particularly after the 'Space Race' of the late 1960s injected new research funds into science. In the United States, learned societies took the lead, supporting their members with the publication of books and research articles as part of the society's membership activities. Inevitably, some of the societies with the largest memberships became leaders in the STM publishing effort. In Europe, a different trend emerged. Although learned societies also played a role, a key feature in Europe after World War Two was the rise of commercial publishing houses which provided research support services for authors. Several of these were based in the Netherlands and supported authors who had been relocated from war-torn countries. Elsevier, North Holland, Excerpta Medica and Kluwer were examples of small independent companies which fostered close and supportive relations with local, and often displaced, scientific communities. In Germany, Springer Verlag did the same, and a later but equally significant entrant in the United Kingdom was Pergamon Press, under the ownership of the controversial Robert Maxwell. The point is that a different flavour of STM publishing emerged in the western world in the 1950s and 1960s, which was to leave its legacy on the commercial policies adopted in the two main STM publishing regions of the world. In Europe, commercial publishers pursued active outreach programmes. They were not constrained to a particular scientific discipline or a distinct membership group. They covered all disciplines; they changed the focus of their editorial programmes in line with trends and shifts in research direction and the emergence of new subject areas. They became more comprehensive in their coverage of research outputs. They filled the gaps left by the focused and (in disciplinary terms) inflexible learned societies.
Several European commercial publishers also adopted the interesting business strategy of allowing their more successful journals to grow in size (in the number of volumes per annum). As the purchaser was charged for each volume, and as each title increased the number of volumes it published annually, the subscription price rose – in some cases dramatically. By the 1980s it was not uncommon for the leading journal publisher of the time – Elsevier, which had by then joined forces with North Holland and Excerpta Medica, and would later absorb Pergamon Press and Academic Press to become the largest STM publisher in the world – to publish individual journals running to over 40 volumes per annum. But if the basis of this growth was a volatile expansion of journals and their volumes across different subject areas, was there no price sensitivity in the market to limit such expansion? This is where the STM publishing industry is unique: it has a market that is very price-insensitive; it has inelastic demand.
No matter how high the price charged, the likelihood of subscriptions being cancelled is low. This is because no two journals, even in the same field, are identical: the content varies, the editorial scope may be tailored to different aspects of the same field, and the editors may have different expertise, skills and energies. A case in point was the area of brain research in the 1980s. The main commercial publishers (and some of the relevant learned societies) had journals in this area, but one title (from Elsevier) reigned supreme – 'Brain Research.' Why? Because the editor at the time had the reputation, esteem, energy and commercial nous to make it expand quickly in a field which was at that time attracting significant research effort; Elsevier also supported this with strong brand recognition. The editors of the other competing journals were more academic and traditional in their approach, and their journals' growth and impact were less. The combination of editorial excellence, flexibility in following new disciplinary trends, and a business model based on payment – in advance – for the anticipated number of volumes of research output provided the basis for a flourishing STM publishing business by the key European publishing houses. It is a legacy which is still with us. The only thing that has changed is the market: the willingness of libraries to buy subscriptions on a continuing basis (librarians do not like to see gaps in their serial holdings) is now beginning to flag. Libraries no longer have the robust budgets of earlier years, and so cannot continue with comprehensive purchasing policies in all subject fields. Meanwhile we have an STM publishing industry structure which is split between America, Europe and the rest of the world, with the larger U.S. learned societies and the larger European commercial publishers being the industry leaders.
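The economists' notion of inelastic demand invoked above can be made concrete. Demand is price-inelastic when the proportional fall in quantity demanded is smaller than the proportional rise in price, i.e. the elasticity has magnitude below one. The subscription and price figures below are purely hypothetical, chosen only to illustrate the calculation:

```python
def price_elasticity(q0, q1, p0, p1):
    """Elasticity of demand: % change in quantity / % change in price."""
    pct_q = (q1 - q0) / q0
    pct_p = (p1 - p0) / p0
    return pct_q / pct_p

# Hypothetical journal: a 10% price rise loses only 1% of subscribers.
e = price_elasticity(q0=1000, q1=990, p0=5000.0, p1=5500.0)
print(e)           # -0.1
print(abs(e) < 1)  # inelastic demand: True
```

With an elasticity of -0.1, raising the price actually increases total revenue, which is exactly the dynamic the volume-growth strategy exploited.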
But whilst there is uniformity in ownership patterns between North America and Europe, this does not mean that STM publishers always act in concert, or to the best advantage of the industry sector. There is a strong element of competition between publishers which has in the past reduced their ability to work together constructively. One effect is that there is no united industry voice to counter some of the charges being levelled against them. The national and international publisher associations (IPA, PA, STM, ALPSP, AAP/PSP) have focused on ensuring that the traditional business is not challenged by external, anti-commercial and possibly illegal activities, rather than on improving the industry's overall brand image. It has meant that neither they nor their members have been innovative as a group in pushing forward the boundaries of new communication systems in the Internet world. Their aim is to protect what they have rather than to risk what might be appropriate, and one of the key things to protect is total net income for the benefit of their owners, the shareholders.
This balance in the world structure of scientific publishing is now changing. There has been a dramatic emergence of Far Eastern countries as producers of research output, and therefore of publications. As will be outlined later, China in particular has become a growing force in scientific information, which has resulted in all the leading scientific publishers establishing a physical presence in this market to avoid being left out in the cold. This is changing some of the strategies adopted by publishers in reaching out to new markets. So far, the outreach in the Far East is to market sectors comparable to those being addressed in the West – academia and large corporate R&D departments. As such, publishers' revenues and profits remain high, with growth in the Far East compensating for static or, in some instances, declining library markets in the West. Overall, scientific publishing remains on a high. But could this change as the elements of the perfect storm begin to bite? There are indications of some concern among a few of the leading publishers, and also among some of the investment community.
STM Communications under Scrutiny

STM Publishers

To many industry watchers there are some bleak aspects to the scientific communication process – a series of restrictive barriers placed along the road to the effective enablement of scientific progress. University libraries are considered closed shops, benefitting the very few; online publisher sites are blocked to public access; in many countries books and e-theses remain inaccessible; academic monographs are available mainly on paper and at prohibitive prices. And STM journals, the heartbeat of the research process, are becoming too expensive and restricted to an elitist group. Given that some of these challenges have received public attention in recent years, it is perhaps inevitable that the STM publishing sector has been subject to criticism about the way it operates (see the pundits' views in the previous chapter). Critics have complained, with a sense of injustice, that publishers contribute too little to the research process, notably the progress of science, and take too much in short-term revenues. Other criticisms are based on the claim that STM publishers have not fully adapted to the new information millennium and, instead of providing leadership in promoting new and effective communication models, have resorted to protecting a system of more historical relevance. But the real complaint rests on the inability of people who
may benefit from access to scientific publications to gain that access, for a number of technical or administrative reasons. In early 2012 a web site was created where authors of research output could publicly commit to a boycott of Elsevier, which was seen as the leading villain among commercial journal publishers (Gowers, 2012). The boycott took the form of refusing to publish in, referee for, or perform editorial services for its journals. If the boycott had become widespread it would have posed a serious problem for Elsevier, because it would have stemmed the flow of papers into its journals, with obvious implications for its revenues. Conscious of the potential harm the campaign could do to its business were it to escalate, Elsevier was stung into responding – recognising that it needs to talk to the world, and not just to its customers and investors. Thus far the STM publishing industry has not been particularly effective in highlighting how important its activities are in providing a quality, stable forum for scientific communication. The various STM national and international trade associations have done much to promote their cause, but their voices have been dwarfed by those of their critics in other media – blogs, listservs, etc. The latter have usually based their criticisms on emotional issues rather than hard evidence. Nevertheless, the public relations battle has largely been won in recent years by those who claim that 'there is a better way.' This is particularly the case in the U.S.A., where the involvement of Congress in how the results of federally funded R&D should be disseminated has gained a high profile (see Chapter 25 on Open Access). The movement to make taxpayer-funded research freely available online hit a milestone in late May 2012, when advocates reached their goal in a "We the People" petition to the Obama administration.
The petition, created by the Access2Research group in support of the 'free' Open Access business model, demanded that President Obama make taxpayer-funded research freely available. According to the petition site's rules, any petition securing 25,000 signatures within 30 days would be sent to the White House Chief of Staff and would receive an official response. The Open Access petition hit the 25,000 mark in half the allotted time, and had reached 65,700 signatories when the White House released its policy on publications derived from federally funded research (see Chapter 26 on Political Initiatives). Similarly, in the U.K. a government-sponsored review was undertaken in early 2012 into how greater openness could be achieved, by a cross-stakeholder committee under the chairmanship of Dame Janet Finch of the University of Manchester (see RIN, 2012). This review also endorsed greater freedom in enabling access to publicly funded research outputs.
STM publishers say that the risks involved in changing the STM publishing system are significant, and that there is no proof that any of the alternative paradigms being suggested are tenable, viable or sustainable. However, the plea that one should not 'throw the baby out with the bathwater' is not a sufficient answer to some of the critics of STM publishing. More insightful analysis and positive strategies are required.
Research Libraries

In December 2009 the results of a global survey of library budgets were published by the CIBER research unit, then part of University College London (CIBER, 2009). The survey, the first fruits of the Charleston Observatory in the U.S.A. and based on responses from 835 libraries worldwide, was co-sponsored by Baker & Taylor's YBP Library Services and the e-book provider ebrary. The majority of respondents were from the U.S.A. (62.3%), followed by the U.K. (12.7%). Universities made up almost two-thirds (63.6%) of the sample, the remainder including further education and community colleges, government libraries, national libraries, hospitals and corporations. Based on the findings, the report suggested that "for the vast majority (of libraries) looking forward into the next two years … recessionary pressures really are going to make an impact." 37.4% of institutions expected to cut spending on information resources over the following two years, 28.3% expected to cut staffing budgets, and 18.1% their spending on services and infrastructure. The survey suggested that academic libraries would be hardest hit by the budgetary pressures, with 34.3% of them expecting to receive a smaller budget in two years' time than they did currently (in 2009). The report also indicated that "for a small minority, 6.9%, the pain will be very severe, since their budgets will be more than 10% smaller than this year." 29.9% of public sector and government libraries and 23.5% of libraries in the corporate sector also anticipated smaller total library budgets. Much of the shortfall would be absorbed through reduced spending on information content, with "69.1% of respondents in all sectors and territories expecting to spend the same or less than they do today in absolute terms". Academic libraries, especially in the U.K.,
were the most pessimistic about the short-term outlook – expecting to be in a significantly worse position in two years' time than they were at the time of the survey. In a series of comments made on one of the industry listservs, a leading U.S. academic librarian (Rick Anderson, University of Utah) succinctly pointed out the options which were available to him in resolving his own institution's 'serials crisis.' Firstly, he commented in Scholarly Kitchen (Anderson, 2011):
104 Chapter 9
The Main Stakeholders
The problem with the subscription model is simple: it’s fundamentally irrational. It requires me to pay a year in advance for a series of bundles of articles, many of which my patrons don’t actually need. At the right price that model can be perfectly sustainable, but that doesn’t make it any less silly as a model.
Anderson also referred to the suggestion that publishers may raise their article prices (for document delivery from their servers) to counteract the decline in subscription revenues by saying, in a listserv posting on 8th November 2011:

I fully expect publishers to do whatever they can to continue realizing constantly-increasing revenues from their library customers. It's only reasonable that they should do so, especially given that they've been able to secure increasing revenues every year for the past few decades. The problem is that they're going to fail, because throughout those past decades, the curves of library budgets (which have tended to increase annually at low single-digit rates) and journal prices (which have tended to increase at rates of 9–10% per year) have been headed for an inevitable and permanent divergence. Libraries have been warning publishers about this since as long ago as the 1970s; the standard response has been "Yeah, you keep saying there's a pricing crisis, but then you renew all your subscriptions." Now the divergence has come. Most of us can no longer redirect money from other budget areas to cover extortionate subscription price increases, so now we're cancelling, and some of us are doing so in great chunks. Publishers (in the aggregate) aren't going to be able to continue realizing annually-increasing revenues from libraries anymore for the same reason that you can't realize blood from a rock. Charging more for articles isn't going to help publishers at all, because the money they want simply isn't there to be had.
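The arithmetic behind this divergence is simple compounding. As a minimal illustrative sketch (the 3% and 9.5% rates below are assumptions drawn from the ranges Anderson cites, not figures from any actual library), the following shows how much of an unchanged subscription list a library could still afford after twenty years:

```python
# Illustrative compounding sketch: a library budget growing at 3% per year
# versus journal prices rising at 9.5% per year (assumed rates).
budget, prices = 1.0, 1.0
for year in range(20):
    budget *= 1.03   # low-single-digit budget growth
    prices *= 1.095  # 9-10% annual journal price inflation

# Fraction of the original subscription list still affordable.
affordable = budget / prices
print(f"After 20 years: {affordable:.0%}")  # prints "After 20 years: 29%"
```

Under these assumptions the budget buys barely 30% of the original list after two decades – the "inevitable and permanent divergence" Anderson describes.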
Additional weight to the librarians' concerns appeared in a letter sent in April 2012 by Harvard's Faculty Advisory Council to the faculty concerning what it alleged was a crisis with its scientific journal subscriptions. The letter – entitled 'Major Periodical Subscriptions Cannot Be Sustained' – reported an "untenable situation facing the Harvard Library" in which "many large journal publishers have made the scientific communication environment fiscally unsustainable and academically restrictive." The letter revealed that Harvard was paying $3.75 million annually in journal subscriptions and that these made up "10% of all collection costs for everything the Library acquires." A few of the journals, it said, cost upward of $40,000 a year each. "Prices for online content from two providers have increased by about 145% over the past six years, which far exceeds not only the consumer price index, but also the higher education and the library price indices." It concludes by saying that "Major periodical subscriptions, especially to electronic journals published by historically key providers, cannot be sustained." (Harvard University, 2012). One commentator summarised the letter with: "The wealthiest university on Earth can't afford its academic journal subscriptions."
The crux of the problem – the difference between publishers' expectations of future growth in line with the output of research literature, and libraries' expectations of stable if not falling budgets because of institutional cutbacks – is a universal challenge with which the industry has been faced during the past few decades. It is not just a U.S. problem; it also applies to the U.K. It suggests that there is a fundamental malaise within an industry caught between the expectations of the publishers and the realities facing the main research libraries. In this sense, the locked-in position which scientific information has achieved – keeping the majority of the world's knowledge workers on the outside looking in – needs finally to be addressed in a constructive and viable way.
Challenges facing Scientific Communication in Difficult Economic Times

It is claimed that libraries have been compelled to cut back on acquisitions and other operating costs on a sustained basis. In the face of what may be perceived as an erosion of the library as a core function or service within research institutions, some library advocates have recommended a more radical approach: adapting the publication paradigm in favour of one which takes the higher moral ground, one more favourable to society's overall needs. This has involved questioning the dominant position which publishers hold in the flow of information from creator or author to user. It is the control that commercial publishers have over the dissemination of research results which is, according to critics, stultifying the whole information system. Publishers, on the other hand, have claimed that the so-called 'serials crisis' was largely of the librarians' own making – that they, the librarians, had failed in their in-house public relations and organisational politicking to convince their financial paymasters of the need for additional funds to buy essential publications. The importance of the library as the centre for knowledge exchange within an institution was being weakened, and librarians had failed to come up with satisfactory arguments to protect themselves and their budgets. Neither of the key stakeholders is in a strong strategic position to take the publication of research results into a digital environment. A different approach is necessary, and this fresh approach would need to take into account the potential offered by expanding the reach beyond the walls of academia – to include knowledge workers more generally.
Summary

What the above indicates is that the jury is still out on whether the existing structure for scientific communication is suitable for a fully digital world in which different cultural expectations prevail, and in particular whether that structure can be adapted to allow many more parts of society to embrace and benefit from the results of scientific research which the public has in large part funded through its taxes. The question therefore remains: "Are academic journals a 'threat to scientific communication' and the 'advancement of research'?" Perhaps it is necessary to imagine what scientific research would be without them, or what science would be like if citation metrics and journal prestige were replaced with article downloads and social networking services such as Twitter or Digg. Publishers claim that journals offer more benefit to scientific communication and the advancement of science than harm. In some researchers' minds this issue is still not resolved. For the purposes of this study, it is sufficient to be aware that many researchers and academics are questioning the current state of scientific publishing, and that the questions being asked are relevant to how unaffiliated knowledge workers (UKWs) can be integrated into a future communication system. At present they are not represented – they are very much on the outside looking in. There are some trends which suggest that this disenfranchisement from the results of research output may become less severe in future. One of these trends is that their eyes are being opened as to what research results are available. The ease with which discovery of relevant material can be achieved is a major step towards their becoming participants in a larger scholarly communication process.
Chapter 10 Search and Discovery

Introduction

This chapter looks at the way individuals use the tools available on the Internet to widen their access to sources of information that may provide them with answers. Once a printed publication becomes digital there are many navigational routes to it and within it, and this offers the opportunity for greater efficiency in information use. The pressure is on software developers to provide effective discovery and search.
Search and Discovery

The latent audience for scientific communications could have remained, under the print paradigm, isolated from the mainstream of research output, even though some of it could be of benefit or interest to them. Google, Yahoo and other global search engines have since transformed the process of identifying relevant publications and given everyone equal access to sources of information that were formerly the preserve of only the 'privileged' academic researcher. As their coverage grew, so the potential for people to key their information requests into an intuitive online search box also increased. Search engines point users from all walks of life, including knowledge workers, to relevant and recent items of information. Only with the arrival of these powerful search engines were those at the fringes of the STM system informed of the existence of research outputs of possible relevance to them. A search engine does not always enable the searcher to get full access, but at least they are made aware that a relevant item exists. An International Data Corporation report published in May 2009 indicated that 70% of U.S. knowledge workers turn first to the web to conduct research, and in turn spend approximately 25 hours per month gathering information for both personal and professional purposes. The IDC study surveyed more than 700 knowledge workers, who were asked about the tasks they perform regularly and the software/applications they used. The study confirmed that email was by far the most time-consuming activity for information workers, followed by searching for information. Activities that require working with others – communicating, collaborating, and managing projects –
come next, followed by creating and publishing information (International Data Corporation, 2009). With Web 2.0 applications creeping into the enterprise, information workers are finding tools to help them accomplish their work. Newer tools – particularly instant messaging, but also social networking and blogs – were preferred over more traditional ones such as email or team workspaces, according to IDC.
Discovery of Content in Scientific Journals

In November 2012, Tracy Gardner and Simon Inger (U.K. consultants) published the results of an investigation into 'how readers discover content in scientific journals' (Gardner & Inger, 2012). The survey, the third in a series reporting on the same theme, was based on 19,000 responses to a questionnaire among academics and researchers worldwide. 33% of the responses came from Europe, 30% from North America, and 24% from Asia. In terms of country, U.S. responses were the most numerous (24%), with the U.K. providing 11.5%. There was a heavy concentration of responses from academics – 68% – largely because the mailing lists were sourced from established scientific publishers whose primary target audiences were in academia. 6% came from the corporate sector, and a further 9% from the medical professions. This left 17% as possibly reflecting the interests of unaffiliated knowledge workers. The study identified three types of reader behaviour – citation searching, core journal browsing, and specific subject searching. The aim was to see how researchers chose their starting point using online systems and how they navigated to journal content. The study was intended to help publishers design their 'landing pages' to meet the demand pattern shown by the respondents for article access. It appeared that the use of bibliographic databases as a starting point was important when it came to citation searching; PubMed was important for medical practitioners in particular. Library web pages were on the decline for citation searching purposes. Academic search engines such as Google Scholar, however, were more popular than general web search engines, and were the second most popular starting point after bibliographic databases. Even for the second type of behaviour – journal browsing – abstract and indexing databases continued to grow in use.
However, in this case journal home pages were becoming more popular as researchers became familiar with publisher brands. Even with the third type of behaviour – subject searching – specialist bibliographic databases (abstract and indexing, or A&I, services) remained the most popular resource. Library web pages had also grown in popularity for this purpose, whereas general web pages had declined.
In terms of alternative search options, the study found that social networking links, journal bookmarks, and saved search alerts were not used frequently by researchers. Web pages created by societies, whilst not as important as the A&I services and academic search engines, had nonetheless grown in importance across all three behavioural studies. Google and Google Scholar are clearly dominant, with Microsoft Academic Search, Yahoo, Bing, Scirus, and Baidu as also-rans. Information managers made marginally more use of Scirus (from Elsevier) than did students or researchers. Students used Google Scholar slightly more than Google. Surprisingly, academic researchers used Google more than Google Scholar, possibly because they were heavier users of A&I databases and used search engines for more general searching, negating some of the need for Google Scholar. The academic sector used mobile phones for journal access less than the medical, charity and corporate sectors; those in corporate environments tended to be better equipped with smartphones than their academic counterparts. As metadata distribution is maximised, and users are able freely to choose their preferred routes to content, many of the advanced search features are migrating to the main discovery platforms, leaving publishers' web sites as silos for content.
Is There a Problem with Reach?

According to one U.S. commentator (Dana Roth, of the Caltech library), there is a tendency to over-generalise the access problem, which in his opinion is primarily a problem affecting the biomedical literature. However, he is looking at the issue purely as an academic librarian at a prestigious academic centre, and not taking into account the millions who make up the information 'long tail'. As reported by Roth, "Lack of access, by members of the general public who need to go from PubMed to the full text, is obviously very frustrating". His sense is that few 'serious' researchers or students are truly having a problem with access to the scientific literature:

Granted there are problems for non-subscribers desirous of immediate … seamless … access. But options such as institutional document delivery, visiting or contacting a friend at a subscribing library, direct purchase of individual articles, author websites, institutional repositories, etc. … I doubt that very many researchers are having a serious problem with access.
Not a recipe for an efficient information system, though. And it marginalises the role of the library in meeting a wider demand from the unaffiliated.
‘Blown to Bits’ – Resource Discovery and Navigation

As first conceptualised by Evans and Wurster (2000) in "Blown to Bits", once the glue had been melted between the descriptors (headers, metadata) and the 'stuff' (the item, the full-text article, the chapter), a new business was born. Search engines combined all the available descriptors or metadata within a single searchable system. The ease with which this was done, and the sophisticated technology for indexing and tracing relationships between the indexed items, provided a new and powerful inroad into information and data. Suddenly a new audience emerged, one that had never adapted to the complex search routines of the traditional abstract and indexing services. These new users went for simplicity, ease of use, and a 'good enough' approach. Amazon indicated how quickly the traditional bookselling business could be overtaken by new services dealing solely in book metadata; Amazon's corporate value soon outpaced that of the entire physical bookselling industry. Another leading proponent of this new style of resource discovery for wider media access was Google, which within ten years achieved a corporate net worth greater than the sum of the electronic publishing industry on which it relied for some of its content. It became the elephant in the information room. The emergence of Google is worthy of further analysis, as it has become an essential mechanism in democratising access to the scientific literature.
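The separation of descriptors from the 'stuff' is what makes a single searchable system possible. A minimal sketch of the idea – an inverted index built over article metadata, with invented example titles rather than real publications – might look like this:

```python
# Minimal inverted-index sketch: descriptors (here, titles) are separated
# from the full 'stuff' and merged into one searchable structure.
# The article titles below are invented examples, not real publications.
articles = {
    "doc1": "gene expression in yeast",
    "doc2": "yeast fermentation kinetics",
    "doc3": "protein folding dynamics",
}

index = {}
for doc_id, title in articles.items():
    for word in title.lower().split():
        index.setdefault(word, set()).add(doc_id)

def search(term):
    """Return the identifiers of items whose descriptors contain the term."""
    return sorted(index.get(term.lower(), set()))

print(search("yeast"))  # identifiers of items whose metadata mention 'yeast'
```

A real engine adds ranking, stemming and full-text coverage, but the core move is the same: queries run against the merged descriptors, not against the items themselves.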
Chapter 11 Impact of Google

The Google Mantra

Google is a creature of the Internet. The commercial web took off in 1994 with the Netscape Communications browser, and Yahoo created a hierarchical structure for its content. Between 1995 and 1998, however, two doctoral candidates at Stanford University (Larry Page and Sergey Brin) developed a ranking system that incorporated 'relevance' (defined by a unique and proprietary use of a hundred or so different signals) as well as page ranking as the underlying search process. It was intended that access to the service should be free to all, not just the privileged few or those who were prepared to pay for access. Its aim was to become the free public library online. The system made its debut in 1998, and the market immediately swung away from services such as AltaVista and Yahoo in favour of the new ranking methodology underpinning Google. Viral marketing ensured that it took off as the search engine of choice. Google has single-handedly transformed the scientific information landscape by introducing its patented search engine approach as well as a number of new ventures, including Google Print, Google Scholar, Google+ and Google's library digitisation programme. There is little doubt that Google's intervention has done much to stimulate new and innovative approaches to search and discovery of digital information. As a result, Google now benefits from the strength of its brand. This came out strongly during tests undertaken by the research company Vividence in the U.S. Answers given to questions set for different search engines were compared, and it was concluded that the final differences were not significant. But according to Vividence, Google shone through with a higher subjective customer satisfaction rating, which goes to show that "search is a potentially fickle brand game, resting on perceptions and preferences rather than performance" (Outsell, 2009).
Nevertheless, as a result of its branding and innovative approach, Google has become the first source for scientific information for academics, researchers and knowledge workers. Where does Google's revenue come from to fund such a range of search services for which it apparently makes no charge, particularly given that it is a search engine operating within what is essentially the 'free access to information' ethos of the Internet? In 2001, 77% of Google's revenues of $13.43 billion were generated through advertising, with the rest coming from enterprise search
and other services. By 2003, the advertising component had risen to 95%. Google's future is heavily reliant on its ability to link an advertisement directly to the needs or profile of the online searcher. This is what makes it so attractive to the advertising industry. The wide reach of the service, combined with the extensive collection of digital material, ensures that by linking advertisements to a relevant search, optimal impact on the market for advertised goods and services can be achieved. Many books have been written about Google. One insightful analysis came from John Battelle. In his book "The Search – How Google and its Rivals Rewrote the Rules of Business and Transformed our Culture," Battelle showed how rapidly the transformation had been achieved by the two founding entrepreneurs (Battelle, 2005). They have potentially opened up scientific communications to a much wider audience. Search as a process, according to Battelle, has not always been seen as the powerful force it has since become. In the late 1990s and the early years of the following decade, the concept of the 'portal' – attracting the eyeballs of users and leading them through 'stickiness' to a range of owned and proprietary services – captured the imagination of entrepreneurs and venture capitalists alike. The mathematical algorithm which became PageRank was at the heart of the Google search process. Page and Brin developed the concept with a handful of employees and a rented office suite in a private house. They, perhaps arrogantly according to Battelle, cocked a snook at the rest of the industry and remained focused on their core activity at that stage – the then less interesting search process. Everyone else was trying to develop locked-in communities and paid-for portals. Whether it was inspiration, arrogance or pure good fortune, search engines evolved into the key players of the past decade and a half, with their millions of dedicated followers and tremendous traffic.
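The core of PageRank – iteratively redistributing each page's score across its outbound links – can be sketched in a few lines. This is an illustrative toy (three invented pages, the commonly cited damping factor of 0.85, and no handling of pages with no outbound links), not Google's production algorithm, which combines PageRank with many other signals:

```python
# Toy power-iteration sketch of the PageRank idea (illustrative only;
# assumes every page has at least one outbound link).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page keeps a base share, plus a damped share of the rank
        # of every page that links to it.
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += damping * rank[page] / len(outlinks)
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)  # C, linked to by both A and B, ranks highest
```

The ranks sum to one and reward pages that attract links from other well-linked pages – the 'relevance' signal that distinguished Google from its keyword-matching predecessors.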
Using its many digital users as the platform, Google eventually built the AdWords and AdSense contextual advertising systems, which provided a massive injection of revenues and led to a large number of the then 1,000–3,000 employees becoming millionaires overnight when the company made its controversial IPO (Initial Public Offering) stock auction in August 2004. But as Battelle's 'Search' reflects, it has not been an easy ride. Conflicts of culture emerged as the original motto of the company – 'Don't be evil' – came up against the hard world of commerce. Page and Brin, with a professional CEO then acting as part of the governing triumvirate, seemed almost apologetic about taking advertising, about issuing an IPO which ignored Wall Street practices, and about their dealings with China, which demanded censorship of certain sites. The less contentious corporate mission of 'organising the world's information and
making it accessible' has become their widely used subsequent mission, "and the company may have made its peace with the devil" (Battelle, 2005). However, Google is not out of the woods. The U.S. Patriot Act, whereby the federal government increased its powers for tapping into not only telephone conversations but also e-mails and web usage, highlights the sensitive nature of the data held within the banks of parallel-running computers at Google and other major search engines. A digital footprint follows every user of the service and can be mapped and used for a variety of purposes. The 'clickstream' has become "the exhaust of our lives", and is easily monitored. Should this data stream, or personal digital footprints, be made available to the government? Is trust being broken by doing so? This is a weighty issue and one which is still running its course, particularly in view of the U.S. government's PRISM programme. PRISM, launched in 2007, is a programme under which the United States National Security Agency (NSA) collects the Internet communications of foreign nationals from at least nine major U.S. Internet companies, including Google. Google is also making enemies in its own and adjacent industries. Some users now keep their photos, blogs, videos, calendars, e-mail, news feeds, maps, contacts, social networks, documents, spreadsheets, presentations, and credit-card information – in short, much of their lives – on Google's computers. Google could soon, if it wanted, compile dossiers on specific individuals. This presents "perhaps the most difficult privacy issues in all of human history," says Edward Felten, a privacy expert at Princeton University. In May 2014, Google lost a European court battle over its ability to keep personal information online and linked to an individual, despite the individual concerned claiming that in so doing it infringed their personal rights.
Speaking for many, John Battelle recently wrote on his blog that “I’ve found myself more and more wary” of Google “out of some primal, lizard-brain fear of giving too much control of my data to one source.” In future, Google must maintain or improve the way it deals with the privacy issue. It also needs to resolve how adverts are inserted next to searches. Currently it has far higher ‘click-through rates’ than any of its competitors because it made these adverts more useful, so that web users click on them more often, and this has to be continually improved to ensure advertising revenues continue to flood in. The machinery that represents the fixed costs in doing this is Google’s key asset. Google has built, in effect, the world’s largest supercomputer. It consists of vast clusters of servers, spread over enormous data centres around the world. The details are Google’s best-kept secret. But the result is to provide a ‘cloud’ of computing power that is flexible enough “automatically to move digital load around and between data centres.” If, for example, there is unexpected demand
for Gmail, Google's e-mail service, the system instantly allocates more processors and storage to it, without the need for human intervention. This infrastructure means that Google can launch any new service at negligible cost or risk. If it fails, so be it; if it succeeds, the cloud makes room for it. Beyond its attempts to expand into new markets, the big question is how Google will respond if its success is interrupted. "It's axiomatic that companies eventually have crises," claimed Eric Schmidt, Google's former CEO. History suggests that "tech companies that are dominant have trouble from within, not from competitors." In Google's case, Schmidt says, "I worry about the scaling of the company." Its ability to attract new staff has been a competitive asset, since Google can afford to hire talent pre-emptively, making it unavailable to Microsoft, Yahoo! and others. Only Apple has come anywhere near Google in this respect. Google wins talent wars because its brand is sexier and its perks are lavish. 'Googlers' (employees) commute on discreet shuttle buses (equipped with wireless broadband and running on bio-diesel) to the 'GooglePlex', a playground of lava lamps, volleyball courts, swimming pools, free and good restaurants, massage rooms, and the like. As early as 2005, when it had 2,230 new jobs available, it received over 1.1 million applications – which says a lot about the company's image and its extensive employee benefits. In theory, all Googlers, down to receptionists, can spend one-fifth of their time exploring any new idea. New projects have come out of this, including Google News, Gmail, and even the commuter shuttles and their Wi-Fi systems. But it is not clear that the company as a whole is more innovative as a result. It still has only one proven revenue source, and most big innovations, such as YouTube, Google Earth and the productivity applications, have come through acquisitions. As things stand today, Google has little to worry about.
Most users continue to Google with abandon. The company faces lawsuits, but those are more of a nuisance than a threat. It dominates its rivals in the areas that matter, the server cloud is ready for new tasks and the cash keeps flowing in. According to Battelle the test comes when the good times end (Battelle, 2005). It is too soon to claim that the days of the subscription-based scientific indexing databases are over, but they certainly face an increasing challenge from the free services such as Google Scholar, PubMed, Scirus, etc. (Chen, 2010).
The Google Generation

The term 'the Google Generation' was popularised, amongst others, by CIBER in a study undertaken for the British Library and Jisc, published
in January 2008 (Rowlands, 2008). In this report the 'Google Generation' was defined as those born after 1993 – those who had little experience of life without the web. They maintained constant connectivity with friends and family at any time and in any place. The aim of the study was to see whether this generation was qualitatively different in the way it looked for information from the pre-Google cohort, and what effect this might have on the future structure of publications. The Google Generation study produced some key headlines. These included:
– 'Bouncing' behaviour, whereby people skim one or two pages from an online site before moving on. 60% of e-journal users view no more than three pages, and 65% never return to the site.
– Navigational behaviour. People find as much satisfaction from looking around online as from viewing the actual content.
– Focus. The average time spent on e-books is four minutes; on e-journals, eight minutes. This has led to a new form of 'power browsing'.
– 'Squirreling'. There is evidence that end users download information, especially when free, but no evidence on whether they actually read the printed-out material. There is a suspicion they may feel that an osmosis takes place – that they will naturally absorb the information as a by-product of the download.
– One size does not fit all. There are variations in behaviour according to location, gender, institution, discipline, etc.
– Authority. A variety of sources, including Google, are used to cross-check search results.
The information literacy of young people has not improved with greater access to technology. This relates both to the current use of the Internet by young people and, a technology generation earlier, to their use of early online systems and CD-ROMs. There is little direct evidence that young people's information literacy is any better or worse than before.
However, the ubiquitous use of highly branded search engines raises other issues:
– Young people have unsophisticated mental maps of what the Internet is, often failing to appreciate that it is a collection of networked resources from different providers. As a result, the search engine, be it Yahoo or Google, becomes the primary brand they associate with the Internet.
– Many young people do not find library-sponsored resources intuitive and therefore prefer to use Google or Yahoo instead: these offer familiar, if simplistic, solutions for their study needs. Use within a physical library suffers as a result.
Whilst it has always been assumed that the young 'Net-Geners' are closer to developments – firstly with the Internet and subsequently with specific online applications – the feeling has been that the older generation is catching up (Madden, 2005; Dutton & Helsper, 2007, reporting through the Pew Internet & American Life Project). While many age groups up to 65 years used the Internet for basic functions such as email, more advanced activities such as file sharing, personal transactions, and online content creation were much more prevalent among young people under 30 years of age. As early as 2007, Pew Research in the U.S.A. found that 55% of online Americans in the 12–17-year-old bracket used online social networking sites (Lenhart et al., 2007). Pew Research has also revealed that the Internet has emerged as a major source of information about science; Horrigan reported that one in five Americans turned to the Internet for most of their science news (Horrigan, 2006, for Pew Internet). The big question raised above is whether, and to what extent, the behaviour, attitudes and preferences of today's Google Generation will persist as they mature and some become academics, knowledge workers and scholars. It would be dangerous to stereotype a whole generation. The demographics of Internet and media consumption are rapidly eroding generational differences. The reality is that more people across all age groups are using the Internet and Web 2.0 technologies widely and for a variety of purposes. The young (not just the Google Generation but also Generation Y, the next generation) may have been the earliest adopters, but older users are now fast catching up – the so-called 'Silver Surfers'. In many ways the 'Google Generation' label, reflecting a specific age bracket, is increasingly a diversion. In fact, the CIBER study revealed that age is not the great separator between generations that some assumed.
There is little direct evidence that young people’s information literacy is any better or worse than before. The culture of the discipline is as much a determinant of usage patterns as age. Nevertheless, partial evidence from older age groups might indicate that there could be a rapid migration away from use of the physical library to a virtual library, which raises questions about the relevance of the academic library and librarians in future. Beyond this there appears “… very little evidence of generational shifts in the literature: that Google generation youngsters are fundamentally ‘different’” (Battelle, 2005). By the same token, the CIBER study also commented on an OCLC report carried out in 2007 which suggested that the use of social networking (Web 2.0) tools was not very popular among college students (OCLC, 2007): on average, usage of the various social networking services by OCLC’s respondents ranged from 7% to 12%. There was more of a generational divide over the issue of IPR (intellectual property rights) and copyright, with young people feeling that copyright regimes are unfair and unjust. This has potentially serious consequences for the stability of the scientific information industry in the next decade or so, as these critics become the users and the policy setters of the future. CIBER followed up their British Library-commissioned study with a further study along similar lines, this time in collaboration with the BBC, in 2010/11. This later study also indicated some profound differences between younger and older members of the general public when given a set task: find the answer to the question “Where did the first commercial flight land?” using a search engine of their choice. On average, older respondents took more time (3 minutes 34 seconds) combing the Internet for information than those aged under 20 years (38 seconds). They performed almost twice as many searches, visited almost twice as many domains, and viewed two and a half times as many pages. The average time spent ‘reading’ each page was 20.4 and 7.5 seconds respectively for the over- and under-20s. The knee-jerk ‘digital natives’ interpretation of these findings is that young people are good at technology, and that searching for information online is almost as natural to them as breathing. However, the younger respondents were much more likely to enter a search statement that bore a close resemblance to the question, making them, perhaps, the ‘cut and paste’ generation. Use of new information technology overall is not necessarily the preserve of the Google Generation alone. As such, the ‘Google Generation’ label is perhaps a distraction, and individual personality issues are more important in determining the channels used to search for and discover information.
Ambient Findability

It is not just a generational issue. Peter Morville wrote in his book “Ambient Findability” about the overall challenges of finding one’s way through the mass of data and information currently available (Morville, 2011). He points out that this is evolving and not necessarily consistent – he quotes “the future exists today – it is merely unevenly distributed”. In his book Morville challenges the suggestion that Google offers a good service – “if you really want to know about a medical complaint you don’t rely on Google but rather on NIH’s PubMed database”. People are faced with a bewildering and growing array of information formats (magazines, billboards, TV, etc.), which Morville claims leads to a loss of literacy. Ambient findability is less about the computer than about the complex interactions between humans and information. Not all our information needs will be met easily, he claims. Information anxiety will intensify, and we will spend more time, rather than less, searching for what we need. ‘Information overload’ is reinvented in the digital age. Search engines are not necessarily up to the task
of meeting future needs. They tend to be out-of-date and inaccurate. However, they are trying to rectify some of these emerging weaknesses by improving their technology, refining their ranking algorithms so that the software throws up the ten results that are most relevant to the end user. Publishers, in turn, try to anticipate this through SEO – Search Engine Optimisation – making sure that they understand each new algorithm and include data in their published material (metadata) that ensures a higher listing in a returns set. Whilst search engines pride themselves on speed, this excludes the subsequent activity the end user has to go through in bypassing ‘splash’ pages and other interference in reaching the source data. (Splash pages on a web site are what the user first sees before being given the option to continue to the main content of the site.) According to Bill Gates (Microsoft), what is important is not the search itself but rather getting to the answers. Faced with the above, there are opportunities for new types of ‘vertical’ search services. A vertical search engine, as distinct from a general web search engine, focuses on a specific segment of online content. They are also called specialty or topical search engines. The vertical content area may be based on topicality, media type, or genre of content, such as a research discipline. Google cannot be as precise and filtered as a targeted vertical search service. There are few examples of such services at present (see under Communities in chapter 17); the question is whether these can pull back search activity from the entrenched position large generic search engines have established. The problem is that this requires hard work and investment – few publishers have shown any inclination to create such platforms either individually or in unison. Consequently, the ‘long tail’ of the UKWs is not being provided with a good service. They have to depend on the broad sweep of sources collected within Google and similar generic search engines.
Wikinomics

Several observers of the information industry feel that social media are about to generate a transformation in the pattern of user behaviour and may offer some answers to the current search problem. Two authors coined the term ‘Wikinomics’ – essentially the process of capturing, at grass-roots level, the power and range of a diffuse social network. It is described in their book “Wikinomics – How Mass Collaboration Changes Everything” (Tapscott & Williams, 2006). Historically, individuals have occupied their own information space, but with the ubiquity of personal computers, free Internet telephony, open source software, and global outsourcing platforms, a new form of communication has arisen, which is increasingly impacting on STM and the ‘digital natives’. Is this new social networking system a passing fad, or likely to revolutionise the peer review system, the bedrock on which current scientific publishing is based? The authors believed the latter. The evidence seems to be that the emerging principles of openness, sharing (of some, not all, of their IP), collaborating, peering, and acting globally are migrating from the more general information technology and telecommunication areas (in their adoption of ‘open source’, for example) to scientific information. The web itself is also undergoing change which supports the above. A further aspect is that the new generation, or ‘NetGeners’, is changing habits, with networking becoming a built-in feature of their digital lives. It appears in their adoption of MySpace, Facebook, LinkedIn, Wikipedia, etc. The Pew Internet & American Life Project claims that 57% of U.S. teenagers are ‘content creators’ (Horrigan, 2006). The norms of work for the ‘NetGen’ are speed, freedom, openness, innovation, mobility, authenticity and playfulness. The web is becoming a massive playground of information bits that are shared and remixed openly. The new web is about participating, not passively receiving information. The pressure to create the new Wikinomics comes from several directions. These are:
– “Peer Pioneers” – the open access innovators. These include Wikipedia’s 5,000 regular editors, who support Wikipedia with each article being edited many times.
– “Ideagoras” – global pools of skilled talent, such as through InnoCentive (90,000 scientists in 175 countries), used by Boeing, Dow, DuPont and Novartis to support their research and development programmes.
– “Prosumers” – the new hackers. Services are emerging based on platforms such as Second Life (from Linden Lab), which in this case supports 325,000 participants. Lego uses mindstorms.lego.com to develop new products.
Apple’s iPod and Sony’s PSP are caught up in ‘culture remixing’ (Lessig) and so-called ‘mash-ups’, which involve mixing content from different sources and media.
– “New Alexandrians” – those who improve the human lot by sharing simple but powerful ideas. Open access publishing is one such idea: sharing, rather than hoarding, corporate knowledge. We are now in the age of collaborative science (or possibly Science 2.0). Conventional scientific publishing involves many collaborators – over 170 in a single high-energy physics experiment, for example. We see the emergence of the Large Hadron Collider and the Earth System Grid for climate, astronomy and the environment. The results from these will be vetted by hundreds of participants, not by a few anonymous referees. Blogs, wikis and datasets are also heralding the arrival of Science 2.0. The stumbling block is cultural.
It is assumed that information silos will eventually give way to networked information systems. This involves:
– “Platforms for Participation” – organisations create open stages to enable new businesses to be created. Housing maps, crime activity in Chicago, and the impact of Hurricane Katrina are examples. The platform may be a product (e.g., a car or an iPod), a software module (Google Maps), a transaction engine (Amazon), a data set (Scorecard), etc. The key is for the platform to generate a network of participants.
– “Wiki Workplace” – a new corporate meritocracy is being created. Earlier generations valued loyalty, security and authority; the ‘Geek Generation’ supports creativity, social connectivity, fun, freedom, speed and diversity in the workplace. There is a bottom-up approach to innovation: 250,000 are active on Slashdot; thousands on Linux; 140,000 application developers work on Amazon. These platforms ‘employ’ staff who are constantly changing and in flux, not the comparatively few fixed employees within an individual company. 40% of IBM’s staff no longer work in traditional offices. Consultancy and self-employment could become the new working model in the digital age.
Most journal publishers are too small, too traditional, and too locked into the ‘closed’ system of profitable subscription-based publishing to see the above as anything other than threatening. It is contended by the new school of scientific communicators that, as new platforms for collaboration emerge, a new generation accustomed to collaborating arises, and a new global economy develops which supports new forms of economic cooperation, the conditions are emerging for ‘the perfect storm’ – one which has already had a marked effect on the R&D strategy of many large companies. What is needed is a new mindset among the world of users, a major cultural change to accept the new forms of STM communication that are becoming possible: a more open and democratic exchange of scientific output.
Chapter 12
Psychological Issues

Introduction

According to the cognitive expert Professor Maryanne Wolf, Director of the Center for Reading and Language Research at the Eliot-Pearson Department of Child Development at Tufts University, parents, teachers, and scholars are beginning to question how our immersion in this increasingly digital world will shape the next generation’s relationship to reading and learning, and how it will affect knowledge itself (Wolf, 2007). As a cognitive neuroscientist she was concerned with the plight of the reading brain as it encounters a technologically-rich society. Over the last five thousand years, the act of reading has transformed the neural circuitry of the brain. However, according to Wolf, the ‘reading brain’ is now slowly becoming endangered – an unforeseen consequence of the transition to a digital era. Will children become so accustomed to immediate access to on-screen information that they fail to probe beyond the information given on the screen to deeper layers of knowledge? Or will the new demands of information technologies – to multitask, integrate, and prioritise vast amounts of information – help to develop equal, if not more valuable, skills that will increase human intellectual capacities, quality of life, and the collective wisdom of humans as a species? There is little research that directly addresses these issues, but knowledge from the neurosciences about how the brain learns to read, and how it learns to think about what it reads, is relevant to the publications produced by STM publishers. Using neuro-imaging to scan the brains of novice readers enables researchers to observe how new neural circuitry is developed from some of its original structures. In the process, the brain is transformed in ways that we are only now beginning to appreciate. What does this mean for STM publishers and the wider potential audience of knowledge workers?
Probably very little in the short term. But as new generations focus on the screen-based digital delivery of information, new forms of information will emerge, and the formats which the new mindsets require may change. Whether STM publishers can adapt to the minds of the new generation may determine whether they survive into the 2020s and beyond.
The Shallows

The technology writer Nicholas G. Carr (Carr, 2010) has expanded on the Wolf argument and suggests that the Internet is creating a psycho-sociological revolution in the way people read – a revolution similar in its effect to that which occurred with the arrival of the printing press. There are benefits which come from using the Internet, but at the same time he points out that it has led to cognitive adaptations which need to be highlighted. Carr and Wolf are not alone in claiming that there is a change in researchers’ behaviour patterns as a result of Google. Jaron Lanier (2010) made similar assertions in his book “You Are Not a Gadget: A Manifesto”. In an article in the July/August 2008 edition of The Atlantic, Carr pointed to the many advantages of having immediate access to an incredibly rich store of information (Carr, 2008). He further elaborated on the idea in his book “The Shallows: What the Internet is Doing to our Brains” (Carr, 2010). Research that once required days in the library can now be done online in minutes. He remembers the days when he could immerse himself in the experience of reading a book, allowing himself to get caught up in the narrative and the twists of the argument. Now he, and many literary people like himself, find they can no longer indulge in deep reading – a few pages at a time is the maximum. Reading on the Internet is generally of a ‘shallower’ form than reading from printed books, in which he believes a more intense and sustained form of reading is exercised. “The Net seems to be chipping away at my capacity for concentration and contemplation,” says Carr. He claimed that he had “an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading”. In the past he was able to focus on reading a book or a lengthy article.
Now his concentration often starts to drift after two or three pages. He “gets fidgety, loses the thread, begins looking for something else to do”. The deep reading that used to come naturally has become a struggle. As Carr mentions “When I mention my troubles with reading to friends, many say they’re suffering from similar afflictions. The more they use the Web the more they have to fight to stay focused”. He suggests the cause for this phenomenon is that he spends a lot of time online. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process. (McLuhan, 1967). Individuals using online sites have exhibited “a form of skimming activity,” hopping from one source to
another and rarely returning to any source they had already visited (Rowlands, 2008). They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they would save a long article, but there is no evidence that they ever went back and actually read it. It is clear that users are not reading online in the traditional sense. All this is supported by the CIBER findings on the Google Generation referred to earlier. There are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts, going for quick wins. But it is not just a change in habits; it is also a change in the neuro-system. As Carr documents, the results of various neurological studies show that the brain behaves like a muscle: it needs to be activated regularly to remain strong and healthy, or it loses its power. The science behind this is wrapped up in the billions of synapses which trigger activity within different parts of the brain, and the stimulation of the synapses is critical for retaining memory, both short- and long-term (Carr, 2010).
Neuro-Plasticity

The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the many neurons inside our skulls, was largely fixed by the time one reached adulthood. But brain researchers have discovered that this is not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the mature, adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to re-programme itself on the fly, altering the way it functions” (Olds, 2008). The science of neurology is progressing all the time, and in so doing highlights the fact that much of the change which occurs as individuals adapt to the new technology can be rationalised in science. Scientists have confirmed the existence of nerve cells, or neurons, in the brain. Neurons have central cores, or somas, which carry out functions common to all cells, but also have two kinds of tentacle-like appendages – axons and dendrites – that transmit and receive electric pulses. When a neuron is active, a pulse flows from the soma to the tip of the axon, where it triggers the release of chemicals called neurotransmitters. The neurotransmitters flow across the synapses, or gaps between the neurons, and attach themselves to a dendrite of a neighbouring neuron, triggering a new electric pulse in that cell. Thoughts, memories, emotions, scientific activity – all
result from the electrochemical interactions of neurons, mediated by synapses. Inside our skulls there are 100 billion neurons, and each neuron can have a thousand synaptic connections – an indication of how complex the understanding of behaviour patterns can be. The brain is now seen to be ‘plastic’, with new neural circuits being formed throughout one’s life. Old circuits may get stronger, weaker, or wither away; the amount of use made of a circuit determines whether it remains alive and healthy. The concept of the brain’s plasticity has only become part of neuroscience in the past two decades or so. Before that it was felt that the brain was hard-wired and remained essentially unchanged once maturity had been reached. It has now been shown that it changes; it rewires itself as circumstance and the environment change. Plasticity diminishes as we get older, it is claimed, but it never goes away. Moreover, the brain adapts to damage in the nervous system: it tries to compensate for blindness by enhancing auditory capabilities, or through enhanced touch in reading Braille; it compensates for loss of hearing with enhanced peripheral vision; and it explains the phenomenon of the ‘phantom’ limb during the early stages of recuperation from amputation. “Evolution has given people a brain that can literally change its mind over and over again” (Carr, 2010). So, in addition to a social trend whereby collaboration and cooperation within social groupings is emerging as the way the ‘Net Generation’ conducts its information activities, we also have a change taking place at the personal psychological level. The brain is adapting to the new socio-technical environment within which the individual operates, giving him or her the ability to adapt to and meet the demands of the new infrastructure.
Google has taken this on board, moving on from its corporate motto of ‘don’t be evil’, through ‘organising the world’s information and making it universally accessible’, to attempting to provide ‘artificial intelligence’ comparable with, if not superior to, the human brain. This is a huge leap from the traditional, highly literate approach of reading books and journal articles at leisure to one where the computer becomes ever more powerful and makes us all ‘stupid’.
‘The Virtual Revolution’

This issue was brought to a head in 2010 by a four-part BBC television series, ‘The Virtual Revolution’. According to the programme, within three years hundreds of thousands of British teenagers would require medication or hospital treatment for mental illnesses caused by excessive web use. This warning came from psychologists.
The programme launched a study into the web’s impact on our brains. Initial findings from 100 volunteers who were asked a series of questions found that most 12- to 18-year-olds gave their answers after viewing half as many web pages, and spending only one-sixth as much time on the information, as their elders. The presenter of the BBC series, Dr Aleks Krotoski, said: It seems pretty clear that, for good or ill, the younger generation is being remoulded by the web. FaceBook’s feedback loops are revolutionising how they relate. There is empirical evidence now that information overload and associative thinking may be reshaping how they think. For many, this seems to be a bleak prospect – young people bouncing and flitting between a thoughtless, throwaway virtual world. (Krotoski, 2010)
Neuroscientist Baroness Susan Greenfield, Professor at Oxford University, told the BBC documentary that the web and social networking sites were “infantilising” children’s minds and detaching them from reality. All this emphasises the need for publishers in future to recognise that there is not just a technical revolution taking place in IT; there is also a companion change taking place in the end-user market, whose need for what publishers currently offer is changing. How do these conflicting and complementary social trends impact on the scientific communication process? Do they lead to greater democratisation of the industry’s processes and infrastructure? They do suggest that STM publishers face an uphill task in readjusting their portfolio of products and services to remain in tune with the evolving psychological nature of information ingest by the emerging generations of users.
Innovative Research and Development

One example of where sociological changes are having an effect is in the way research is being conducted in the corporate world. The process whereby organisations such as IBM, Motorola, HP and Procter & Gamble conduct their R&D efforts in-house is under scrutiny. In some instances they now open up their R&D efforts to the community – to skilled users outside their organisations. This was unheard of when the earlier business philosophy reigned and corporate secrets were protected assiduously. A collaborative approach has developed. These companies are exposing their formerly cherished software programmes and data for outsiders to use, and in so doing improve their products and services at a fraction of the cost it would have taken to develop new products in-house. It also enables speedier and more innovative development of programmes, as the power of the community exceeds the power of a few dedicated in-house researchers. Google
has been a classic user of this approach, offering its APIs for other organisations to apply to other datasets and create new ‘mash-ups’. Services such as Swivel show how the traditional approach to dataset dissemination by publishers such as the OECD can be radically improved through the community refining the data (achieving a six-fold increase in usage as a result). Transparency – the disclosure of pertinent information – is a growing force in the networked economy. “Coase’s Law” holds that a company will add new functions until the point at which the latest one becomes cheaper to outsource (Coase, 1937). The Internet has caused the costs of production to tumble, which increases the outsourcing potential for modern companies. Coase’s law explains the problems facing gigantic conglomerates that do it all themselves (General Motors, Ford). Open and free innovative services such as InnoCentive enable forward-looking companies to share their product development problems with the 100,000 scientists around the world who participate in solving tough R&D problems – not necessarily for mercenary reasons, but because it is part of an open and free social network where the benefit lies in solving challenges rather than solely making money. ‘Ideagoras’ have emerged: the systematic tapping of a global pool of highly skilled talent many times larger than is available within any single organisation. Commentators such as Surowiecki (2004) put it down to the “wisdom of the crowd.” Platforms for participation are now being set up by some of the more forward-looking companies to invite the community – the crowds – to participate in product and service improvement. Organisations which currently use this social networking approach, notably Amazon but also Google, Yahoo! and eBay, will need to balance achieving commercial success on the one hand with stimulating the interest, support and loyalty of their communities on the other.
The business model is difficult to balance, and in the early days of social networking this meant that innovative services either adopted the advertising model to stay alive, sold themselves into larger ‘wikinomic’ organisations (Skype acquired by Microsoft), or beat their established rivals into submission (Wikipedia beat Britannica; Blogger beat CNN; MySpace beat Friendster; craigslist beat Monster). And all this happened in the mid-2000s. The difference was that the losers launched web sites; the winners launched vibrant communities. Alliances and joint ventures are vestiges of the central-planning approach – instead one needs free-market mechanisms, according to the ‘wikinomics’ approach (Tapscott & Williams, 2006). It is claimed that failure to participate in this new system will result in great upheaval, distortion and danger for societies, publishers and individuals who fail to keep up with the relentless charge towards developing interactive, collaborative communities. While the old Web was about web sites, clicks, ‘stickiness’ and
‘eyeballs’, the new Web economics – a mere five years or so on – is about communities, participation and peering. It is the latter which is frightening some of the larger commercial publishers. Without control over peer review, without making money out of taking control of the IPR of publications, they are as nothing. And yet it is the exercise of such control which is anathema to social networking and wikinomics. Most technologists agree, however, that digital rights management (DRM) is a lost cause (due to hacker innovativeness) as well as being bad for business. Publishers don’t accept this – this is why Google, Yahoo! and YouTube are driving the industry. New business models to keep publishers in the loop need to be fleshed out. The ‘perfect storm’ which the integration of the above foretells suggests that the traditional ways of disseminating the results of research are no longer fit for purpose. There is a new and potentially large community which has hitherto been denied access to the key results through a combination of both inappropriate publication formatting and business models which restrict open access.
Collaboratories

In many areas scientists are working in teams, sometimes remote from each other. The premise is that multi-author collaboration across several institutions produces better science than research done by a singleton operating within one institution. Science is a social activity, increasingly evident as tweeting, blogging, sharing, mashing and tagging proliferate and become part of the social milieu. Trusted parties working collaboratively on research projects are becoming typical. Even so, research is still largely conducted in small teams, whereas in future it is likely that some research areas will be dominated by a very few large collaborative groups. For example, in the area of signalling gateways there are at most fifteen centres worldwide – eight in the U.S.A. and seven in the rest of the world – and these centres share their information among themselves. Another example is high-energy physics, with collaboration between the Los Alamos National Laboratory in the U.S.A. and CERN in Switzerland. A further example is bio-informatics, with three global nodes, in the U.S.A., U.K. and Japan, sharing and updating research results identified in each territory. There is also crystal research, where chemists, computer scientists and informatics specialists all act in concert on a project. It is claimed that future research in many fields will require the collaboration of internationally-based groups of specialist researchers, each group needing access to distributed computing, data resources and support for remote access to
expensive, multi-national specialised facilities. Problems of geographic separation are especially acute in large research projects. The time and cost of travelling, the difficulty of keeping in contact with other scientists, the control of experimental apparatus, the distribution of information, and the large number of participants in a research project are just a few of the issues with which scientists are faced. The Internet has been the platform on which a number of services have been launched to facilitate better multi-institutional collaboration. The main initial goal was to provide tools for shared access to, and manipulation of, specific software systems or scientific instruments. Such an emphasis on tools was necessary in the early development years of scientific collaboratories because of the lack of basic shared tools to support even rudimentary levels of communication and interaction (such as videoconferencing). Nowadays such communication facilities are ubiquitous, and the design of collaboratories can move beyond developing general communication services to evaluating and supporting the very nature of research activity in science. Collaboratories are increasingly becoming a feature of the research communication landscape, with specific tools being adopted that marry the geographical and multidisciplinary needs of the research group. It is a process that is unique to each discipline, lacking the generic and broad approaches typified by scientific publishing in the past.
International Collaboration on Research Publications

According to an article by Christopher King (2012) in Science Watch, which used data from the ISI database of journal articles, the numbers of scientific papers published with more than 50, 100, 200 and 500 authors reached a plateau from 2000 to 2003, then increased sharply in 2005, and by 2012 each group had reached its all-time high. More than 750 papers with 50 or more authors were published in 2005, compared with a little more than 500 the previous year. Papers with more than 100 authors grew by more than 50%, from 200 to just over 300 in 2003, and to 475 in 2005. Papers with 500 or more authors increased from 40 in 2003 to 131 in 2005 – the largest jump of all, a 200% increase, admittedly from a low base. The release of more recent figures from Science Watch has shown a further steep increase in the number of papers with more than 50 authors, and a particularly notable spike in papers whose author counts exceed 1,000. 'Hyperauthorship' would seem to be flourishing, driven in particular by international collaboration in high-energy physics.
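The threshold counts King reports can in principle be reproduced from any bibliographic dataset that records an author count per paper. The following is a minimal sketch only – the records, thresholds and function name are illustrative, not the ISI data or Thomson Reuters' actual method:

```python
from collections import defaultdict

# Illustrative records: (publication_year, number_of_authors).
papers = [
    (2003, 52), (2003, 120), (2004, 51), (2005, 60),
    (2005, 510), (2005, 1200), (2010, 1050), (2011, 1100),
]

# The author-count bands used in the Science Watch analysis.
THRESHOLDS = (50, 100, 200, 500, 1000)

def threshold_counts(records):
    """Count, per year, the papers whose author count exceeds each threshold."""
    counts = defaultdict(lambda: {t: 0 for t in THRESHOLDS})
    for year, n_authors in records:
        for t in THRESHOLDS:
            if n_authors > t:
                counts[year][t] += 1
    return dict(counts)

counts = threshold_counts(papers)
print(counts[2005][50])  # → 3 (papers in 2005 with more than 50 authors)
```

Tallying every band in a single pass like this is what makes it cheap to plot each threshold as its own trend line, as in Fig. 12.1.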
Fig. 12.1 tracks papers indexed by Thomson Reuters for each year between 1998 and 2011. In 2010, as the graph shows, more than 1,000 papers surpassed the 50-author threshold, and the line continues to rise. In fact, all the groupings in the graph display a notable surge from 2010 onward. This is particularly striking in the bottom-most line, showing 1,000 or more authors per article. Aside from a few blips through 2009, this line was flat – until 2010, when Thomson Reuters indexed 17 papers with author counts above 1,000. The next year, 2011, this number increased eight-fold, with more than 140 papers registering above the 1,000-author mark.
Source: Christopher King (2012) ‘MultiAuthor Papers: Onwards and Upwards’, Science Watch, Thomson Reuters, July 2012.
Fig. 12.1: Trend in multi-author papers
To assess the general make-up of recent multi-author papers, Thomson Scientific divided the papers with more than 100 authors into two main groupings: physical sciences and bio-medicine. The physical science group increased its volume by 144 in 2005 to total 393 (the majority of these papers being in physics). Meanwhile, the number of bio-medicine papers with 100 or more authors declined from 41 in 2004 to 19 in 2006 (Fig. 12.2). Authorship patterns therefore have a disciplinary dimension – there is no one-size-fits-all in team authorship among the sciences. Physicists have embraced collaboratories more willingly than biomedical scientists. It may be a feature of their subject, their culture or their traditional way of sharing information.
Source: Christopher King (2012) ‘MultiAuthor Papers: Onwards and Upwards’, Science Watch, Thomson Reuters, July 2012.
Fig. 12.2: Physical science versus Biomedicine with more than 100 authors per paper
The study also highlighted the individual papers with the highest numbers of authors. In 1987, a paper with 200 authors took the prize, and each year since has seen the winning number increase. In 2000, the most multi-authored paper had 918 contributors; the paper taking top honours in 2006 had 2,512 authors. This implies that the research article has taken on a different form and function. In a presentation made by Elsevier to a Higher Education Policy Institute conference in London in 2013, a chart (Fig. 12.3) showed the extent of international collaboration by several of the key countries, and how such collaboration has increased over the years (HEPI, 2013). All these statistics show that the amount of global collaboration among and between teams of researchers is increasing in many subject areas. This growing openness and collaboration is part of a process which could lead to more involvement from an educated knowledge worker sector. The doors of access to research are no longer as closed as they were in the printed paradigm.
International collaboration – share of U.K. publications vs. comparators, 2008 and 2012. U.K. international collaboration increased by 5.1 percentage points in four years.

Country          2008    2012
China            13.5%   14.9%
Germany          42.5%   47.1%
France           45.2%   50.0%
United Kingdom   42.6%   47.7%
India            17.5%   16.2%
Japan            21.9%   24.4%
United States    26.8%   31.2%
Source: 2013 HEPI Autumn Conference on the Future for U.K. Research Policy & Funding, Royal Society, 14 November 2013
Fig. 12.3: International collaboration on science publishing
The researcher's mindset will have a growing influence on how research information is formatted and disseminated in future. Early exposure to information technology is likely – though unproven – to drive new media appropriate to the Internet, and thereby to affect UKWs and their access to such non-traditional publishing formats. How researchers, as users of scientific information, adapt to published information will be explored in the next chapters.
Chapter 13 Users of Research Output

Researchers

Interviews and focus groups have been conducted with individual researchers during the course of this and many other studies to see what effects the above trends are having on the way researchers identify sources, collect information and disseminate their research findings. The results show considerable confusion about how scientific communication plays itself out. In a study entitled 'What Authors Want' (Swan, 1999), carried out in 1998/9, questionnaire responses were received from over 3,200 authors. The survey focused on what authors wanted out of the publishing process. A second study, 'Authors and Electronic Publishing' (Swan & Brown, 2003), was carried out in 2002 and received 1,246 replies. The focus of the second survey was on respondents' views – both as authors and as readers – on traditional and new forms of digital research communication. Authors were clear about their most important publishing requirements: 33% put 'communication with peers' first, with 'career advancement' close behind (22%). Other objectives were much less important: 'personal prestige' (8%), 'funding' (7%) and 'financial reward' (1%). They were also asked how these key objectives should be achieved. The most important was 'communication with the widest possible audience.' Joint second were 'publication in high-impact journals' and 'the quality of peer review.' Next came 'retrievability through abstracting and indexing services,' then 'speed of publication' and finally 'enhancement of personal publications list.' These studies, undertaken before the Internet had become a central feature of scientific publishing, showed that communication of results was uppermost in the minds of authors. That was then.
More recently, a study was undertaken by the Center for Studies in Higher Education (CSHE) at the University of California, Berkeley, into scientific communication in seven main disciplines (Center for Studies in Higher Education, 2011). The CSHE study found no evidence that 'tech-savvy' young graduate students are bypassing traditional publishing practices. In fact they appeared to fall in line with the norms and conventions of their disciplines and were therefore traditional in approach. The report notes: Experiments in new genres of scholarship and dissemination are occurring in every field, but they are taking place within the context of relatively conservative value and reward systems that have the practice of peer review at their core. … We found that young scholars
can be particularly conservative in their research dissemination behaviour. (Center for Studies in Higher Education, 2011).
This conservatism in scholarly behaviour can be both a curse and a blessing to the various stakeholders seeking to establish a footing within the transition from a print to a digital publication system. Another recent study, into the difficulties facing users in accessing research output, was undertaken by CIBER (Rowlands & Nicholas, 2011). In this study, journal articles were considered critical to discovery. CIBER also found that, compared with other types of scientific resource, journal articles are relatively easy to access, especially in universities and colleges, but this generalisation disguises the fact that access to articles is patchy in some parts of academia. Overall, 11.5% of all researchers described their current level of access to journal articles as 'poor' or 'very poor'. For university researchers the proportion falls to only 5.4%, but it rises to 19.8% for knowledge workers in small and medium-sized enterprises and 22.9% in manufacturing. The U.K. industrial sectors reporting the poorest levels of journal access included the motor industry, utilities, metals and fabrication, construction, and rubber and plastics – although research clearly takes place in these industries. Despite these pockets of relatively poor provision, most researchers (71.5% in universities and colleges, 57.6% in industry and commerce) believed that access to journal articles had improved over the previous five years, largely as a result of changes to publishers' business models – journal bundling or the 'Big Deal', consortial purchasing by librarians, the growth in open access, and the greater availability of information in digital form. Nevertheless, nearly half (45.8%) of the researchers in the CIBER study reported that they had faced difficulty accessing the full text of journal articles they needed on ten or more occasions over the previous twelve months (Rowlands & Nicholas, 2011).
It is not possible to quantify the knock-on effects of this 'failure at the library terminal'. A spectrum of outcomes is possible, from mild frustration to more serious consequences such as repeating an experiment unnecessarily or losing out on a grant. This is a worrying 'known unknown'. The reasons for failure at the library terminal are varied, but a common theme running through much of the data was that users did not have access to a wide enough range of journal titles to satisfy all their needs immediately. They were also confused by a plethora of library, publisher and third-party information systems: greater simplicity, and more open systems, are required if researchers are to reap fully the rewards of considerable (but insufficiently joined-up) investment in the infrastructure.
There is also much confusion about licensing, and particularly about walk-in rights, especially for e-resources. Pay-per-view business models represented a major disincentive to accessing research publications. There was widespread reluctance to pay for individual articles at the prices currently asked by publishers and document suppliers, and a minority of researchers (26.3%) claimed strong objections in principle to this mode of access. Nevertheless there were indications of a substantial market for pay-per-view, and this could grow further if acceptable business models could be found: 12.6% of respondents to the CIBER survey said they might consider buying individual journal articles in the future, a proportion which rises to 43.8% in the case of conference papers. The issue of business models is taken up in chapter 24. Researchers are being pulled in different directions. One is the traditional route, whereby reliance on publishers' book and journal programmes leads to international visibility, promotion, tenure and funding. Another route is through social media and social networking, which provide the speed, openness, interactivity and spontaneity missing from the traditional system, but which potentially lead to inaccuracies and a lot of 'noise' (which needs sifting). Still others look to data and text mining using robotics to meet their specific information needs. A report by Weller and Dalziel (Weller & Dalziel, 2007) suggested: If a user wants to find small courses to formally accredit their understanding of highland knitting patterns, the history of Sydney in the 1960s, or anthropology among football fans, then most current formal providers will not meet their requirements, but a sufficiently distributed pool of user-generated sources might.
Some researchers already use social media and online communities as an alternative, or complement, to the journal publishing system. Although these have tended to be the younger researchers, 'grubbing' around for as much relevant information and visibility as possible, another view is that it is the established researchers at the top of the tree who have more time to experiment with new dissemination models. What is clear is that the same researchers can speak very differently depending on whether they view the system from a reader's or an author's perspective. The two are by no means synonymous.
Typology of Users

Studies undertaken in the past apply typologies to individuals according to their ability to cope with published scientific information in whatever format.
The profiles vary from the totally switched-off to those who become gatekeepers, selecting, filtering and redistributing information on behalf of colleagues. The latter are the 'mavens'¹ (see 'Tipping Points' in chapter 14, and (Gladwell, 2000) in references). Each individual comes to terms with the information challenge in their own way.

Faxon Institute

One of the earliest attempts to devise a typology of scientific researchers according to information habits was published by the Faxon Institute as a closed multi-client study in 1991/2. Faxon employed a Boston-based psychology consultancy to collect information on usage patterns among small groups of researchers in different disciplines, using combinations of in-depth interviews, focus groups, self-reporting of actions in diaries and workbooks, etc. (Faxon Institute, 1992). The disciplines analysed included chemistry, genetics, computer science, biology and astronomy. The main focus was the impact which email and related communication services were having then – in 1990 – on the way such professionals operated. The results can be summarised in the typology of users in Tab. 13.1:

Tab. 13.1: Profiles of researchers (1991/2) by the Faxon Institute
– Information Zealots: hyperactive consumers of all types of information from all sources.
– Classic Scientists: moderate consumers of information, confident that they are 'keeping up'.
– Young Technologists: confident they read enough; orientated to acquiring information from the desktop rather than the library; have few information encounters.
– Information Anxious: acquire as much information as the other types but are anxious to acquire even more.
– Older Teachers: readers of books, in libraries, but not online.
– Product Researchers: corporate R&D staff using email extensively but not visiting libraries.
The interesting point is not the percentage share achieved by each profile, but that the study highlights different categories of researchers and how they relate to published information.
1 A ‘maven’ is a trusted expert in a particular field, who seeks to pass knowledge on to others. The word ‘maven’ comes from Hebrew, meaning one who understands based on an accumulation of knowledge.
The constituents of the 'Information Anxious' category were similar to a category identified by the then Mercury Enterprises in their collaboration with the British Library in the mid-1990s – the latter pictured a world in which fully 50% of researchers in the U.K. could be identified as being 'Out-in-the-Cold' (or OINCs) as far as the available information services were concerned. They had either found the information overload problem too stressful and switched themselves off, or were switched off by the system because they fell between institutional purchasing schemes. The 'Classic Scientists' and 'Information Zealots' represent the backbone of the scientific scene, yet in combination they accounted for only about half of the community. The 'Information Anxious' category in particular highlights one area where the information system is not serving the customer base adequately. In other parts of the Faxon Institute studies, the psychologists conducting the research measured self-perception of information competency among 600 researchers. One third of the respondents felt they read less than 20% of what they needed to in order to do their job well. Only 27% felt they read more than half of what was required.
SuperJournal

A study was done ten years later in the U.K. – the 'SuperJournal' project (Pullinger & Baldwin, 2002). This was Jisc-funded, with participation by publishers. It aimed to see what changes occurred as a result of the electronic journals which were just becoming available, and how users reacted to them. A number of subject clusters were created, based on the responses received, with between 10 and 20 journals included in each cluster. The clusters were:
– Communication and cultural studies
– Molecular genetics and proteins
– Political science
– Materials chemistry
Approximately 2,500 users were included in the survey. They formed the database from which a number of typologies could be ascertained. Two main types of users were identified – regular users and occasional users. Within these two broad categories a number of sub-types were identified in terms of their use of the online services/journals (Tab. 13.2).
Tab. 13.2: Profile of users in the SuperJournal project

Regular users:
– Enthused: frequent use of large numbers of journals, usually of the full text; mainly social scientists.
– Journal-focused: very frequent use of specific journals, with half their time spent on the full text.
– Topic-focused: access titles once every six weeks or so, from several journals on average; more social scientists than natural scientists.
– Article-focused: access once every two months; use only one journal, sometimes reaching into the full text; mainly natural scientists retrieving known articles.

Occasional users:
– Bingers: used the service intensively for a short period of time and did not return.
– Explorers: used the online service extensively, making several repeat visits.
– Window shoppers: came into the online service, looked around, and then left; mainly natural scientists.
Pew Internet and American Life Project

Though more recent evidence is limited, a gross approximation of users by type was undertaken within the Pew Internet and American Life Project in 2008. The typology identified by Pew (Fox, 2008) is presented in Tab. 13.3.

Tab. 13.3: Pew categorisation of information users
– Omnivores: enthusiastic users of everything related to mobile communications technology.
– Connectors: tended to be older females, focused on the communication aspects of the new technologies.
– Lackluster Veterans: used the Internet frequently but were less avid about cell phones.
– Productivity Enhancers: held strongly positive views about how technology helps them increase their productivity at work and at home.
– Mobile Centrics: fully embraced their cell phones but made little use of the Internet.
– Connected but Hassled: found all connectivity intrusive and information something of a burden; often experienced information overload.
– Inexperienced Experimenters: casual users who occasionally took advantage of the interactivity on offer.
Tab. 13.3 (continued)
– Light But Satisfied: had some technology but it did not play a major role in their lives; loved TV and radio.
– Indifferent: proudly proclaimed that they did not like the technology, but begrudgingly used it a little.
– Off The Network: had neither a cell phone nor an Internet connection; older females dominated this group.
The enthusiastic and prolific users of the new technologies represented, again, less than half of U.S. Internet users. There was then still a great deal of potential to increase the Internet and mobile communications user base in a well-developed economy such as the U.S.A. In the rest of the world the potential new demand could be even greater. The Pew results were updated in April 2014, summarising the recent position of Internet adoption by the U.S. population (Duggan, 2013). The ways in which people connect to the Internet are also much more varied today than they were in Pew's 2008 survey. As a result, Internet access is no longer synonymous with going online with a desktop computer. In 2012, 88% of American adults had a cell phone, 57% had a laptop, 19% owned an e-book reader, and 19% had a tablet computer. About six in ten adults (63%) went online wirelessly with one of those devices. Gadget ownership is generally correlated with age, education, and household income, although some devices – notably e-book readers and tablets – are as popular or even more popular with adults in their thirties and forties than with young adults aged 18–29. The rise of mobile communication technology is changing the story. Groups that have traditionally been on the other side of the digital divide in Internet access are using wireless connections to go online. Among smartphone owners, young adults, minorities, those with no college experience, and those with lower household income levels are more likely than other groups to say that their phone is their main source of Internet access. (Source: Pew Research Internet Project, April 13, 2012, 'Digital Differences' by Kathryn Zickuhr and Aaron Smith.) In essence these results indicate that there is still some latent potential within U.S. society to incorporate the Internet within everyday usage patterns. Knowledge workers come in all demographic types.
Ofcom

The Office for National Statistics (ONS) commissioned a study conducted by Ofcom, the U.K. government's telecommunications watchdog, in August 2014.
This gave a picture of how the younger generation has adapted to new communication technology within the U.K. According to Ofcom, the advent of broadband in 2000 has created a generation of 'digital natives'. "These younger people are shaping communications," claimed Jane Rumble, head of Ofcom's media research. "They are developing fundamentally different communication habits from older generations, even compared to what we call the early adopters, the 16-to-24 age group." (Ofcom, 2014). Millennial children contact each other and consume entertainment differently from previous generations, and industry pundits now consider their preferences a better indication of the future than those of innovative young adults. The most remarkable change is in time spent talking by phone. Two decades ago, teenagers devoted their evenings to using home telephones. For those aged 12 to 15, however, phone calls now account for about 3% of time spent communicating; for adults the figure is 20%. Today's children do the majority of their remote socialising by sending written messages or through shared photographs and videos. "The millennium generation is losing its voice," Ofcom claims. Over 90% of their device time is message-based: chatting on social networks such as Facebook, sending instant messages through services such as WhatsApp, or even sending traditional mobile phone text messages. Just 2% of children's time is spent emailing, compared to 33% for adults. Away from their phones, 12-to-15-year-olds have a different relationship with other media. A seven-day diary showed that live television accounts for just 50% of viewing for this age group, compared to nearly 70% for adults. They spend 20% of their viewing time on short video clips, for example on YouTube.
Young adults aged 16 to 24 are active consumers of almost all media, devoting 14 hours and seven minutes each day to communications if the time spent multi-tasking – for example, texting while watching TV – is included. However, their use of radio and print-based media has all but disappeared. "For years there has been a very stubborn resistance by the over-65s to accessing the Internet," said Ofcom. "In the last three years we have seen that change and we think that's down to tablets." Britain is embracing Internet-enabled devices across the generations, to the extent that the balance between sleep and screen-based activities has now tipped: the typical adult spends eight hours and 41 minutes each day communicating or consuming media, including 'old-fashioned' books and newspapers, and just eight hours and 21 minutes asleep. Such a change in behaviour patterns is significant at all ages, but particularly in the generation which will soon feed into the nation's research efforts. Not only will it affect the way academic researchers undertake their communication activities, but it will also set new demands (by UKWs amongst others) to
modify the communications systems and infrastructure to allow them to become active players in research whatever their institutional affiliation.
Patterns of STM Use

King/Tenopir

In studies undertaken by Donald King and Carol Tenopir, the patterns of use of STM researchers have been monitored over time. The results are presented in Tab. 13.4.

Tab. 13.4: Understanding patterns of use
– Readings per researcher: the numbers of articles read, and hours spent reading, in academia and in industry.
– Source of articles read: the split between personal subscriptions and library collections.
– Age of article read: a substantial proportion of reading is of journal articles more than a year old.
– Time spent on research: hours spent per annum on all research activities.
– Type of document read: research journals, trade journals, professional books, external reports/grey literature, internal reports and other materials, for university and non-university researchers.
– Disciplinary differences: average numbers of articles read per annum by medical researchers, paediatricians, engineers, and social scientists and psychologists.
– E-journal usage was highest among physicists, biologists, and biomedical scientists.
– Average readership per journal article is 900 (with a range from 500 to 1,500).
– Scientists are reading from more journals: at least one article from 13 journals in the late 1970s, from 18 journals in the mid-1990s, and from 23 journals in the late 1990s.
Tenopir and King based their data on responses from over 25,000 scientists, engineers, physicians and social scientists over the past thirty years. Within this period the average number of articles read per university-based scientist rose from 150 (1977), to 172 (1984), to 188 (1993), to 216 between 2000 and 2003. This is not consistent across disciplines, as shown above. Tenopir/King also established that during the same thirty-year period the average number of personal subscriptions taken by individuals fell from nearly six to under two. This was compensated for by access through central library holdings and the full range of library support services (Tenopir & King, 2000). Article discovery by scientists was dominated by 'browsing', which accounted for between 48% and 56% of discoveries, with a further 20% or more coming from online searching of a topic. A colleague or other outside intermediary accounted for most of the rest.
RightsCom

A 2005 study undertaken in the U.K. by the consultancy RightsCom showed fewer differences between disciplines than some commonly-held assumptions suggest. It also indicated that the impact of the U.K. national Research Assessment Exercise (RAE) on academics' publishing practice, and academics' attitudes to alternative forms of publication, was having a levelling effect. In terms of the single most essential resource, what stood out in the RightsCom study was the importance of journal articles for the medical and biological sciences; the importance of e-prints (pre and post) in the physical sciences and engineering; the broader mix in the social sciences; and the importance of books in languages and area studies within the humanities. The responses also showed that, in 2004/5, a broad spread of discovery tools was in use, rather than total reliance on search engines alone. Informal information-seeking was more likely to be face-to-face, by telephone or by email than to use other technologies. Reading email newsletters was most popular among arts and humanities and social science respondents. Most respondents in three of the five groupings surveyed did not report problems in accessing research resources. A lack of reported problems was correlated with the availability of a research assistant to do some of the grunt work, which was more likely in the hard sciences. However, in two of the five groupings a majority reported problems, and in the rest the minority was large. The main problems were in gaining access to journals, conference proceedings, books and databases, but groups also had problems getting funds to travel
to access resources and also in getting access to proprietary information. There was little difference across disciplines in the extent to which journal articles were a problem, but other problems were more discipline-specific. The second theme of the survey was research collaboration and communication, informal and formal. The results partially confirmed some of the typologies described earlier, in that 'harder' disciplines were more likely to collaborate in the research process and be prepared to use less formal methods to disseminate results, while 'softer' ones were more likely to communicate work-in-progress informally but to rely on more formal means for dissemination. As might be expected, means of dissemination varied across disciplines, with some fields more likely than others to produce monographs, patent applications, software or exhibitions. The importance of the journal article (in all forms – pre-print, post-print and final publication), book chapters and conference papers was shared across fields. There was consensus that journal articles had the biggest influence on RAE/REF scores. This emphasised that the formal publication system is tied to academic needs, and that UKWs would not figure as determinants of future information infrastructures.
CIBER Research

CIBER commenced operations in 2001 as a research unit at City University. It relocated to University College London in 2005, and then became an independent for-profit research operation in 2011. It recognised that researchers and users leave a digital trail in their wake when searching for or looking at information on the Internet. This digital trail is a more accurate measure of usage patterns than those identified from the small samples of questionnaire responses on which nearly all the above studies are based. Every click on the keyboard can be measured and used to interpret actual research activity. It gives robust and abundant data on actions actually taken; it does not rely on impressions or distant memory, nor does it depend on aggregating small samples into speculative conclusions. CIBER has produced many reports in recent years about the digital user. It opened the eyes of information professionals and publishers alike to what the Google Generation were up to in the virtual, unmediated digital information space. Information-seeking was fast, furious, abbreviated and promiscuous; bouncing and skittering were the preferred forms of behaviour; viewing was preferred to reading; few people undertook advanced searching; and everyone used Google. Follow-up work showed it was not just the Google Generation, but also
virtual scholars who were behaving in ways not quite what librarians and publishers had envisaged when designing their websites and databases. The supposedly orderly information-seeking and reading behaviour of the scholar had been transformed by the move to the digital world, the huge range of choices offered, and the data storm unleashed. Fast-bag pick-up (grab a PDF and get out quickly) and reading ‘lite’ became the order of the day as scholars developed new strategies for dealing with expanding choice and the unending data deluge and digital transition taking place, according to CIBER. These changes in behaviour are significant in understanding how the UKW sectors of society can influence the way scientific information is produced and packaged for consumption by an audience which is much more diverse and less specialised than those sitting behind university walls. Journals and books are ideal formats in certain circumstances – such as for gaining recognition, reputation or research funding – but there are other aspects of communication which are relevant to achieving greater democracy in science. These need to be considered in the light of the changing mindset and mentality of the wider scholarly community.
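The digital-trail approach CIBER pioneered can be illustrated with a minimal log-analysis sketch. The log format, field names and sample records below are invented for illustration; the point is that per-visitor metrics such as pages viewed and dwell time – the raw evidence behind findings like ‘bouncing’ (one-page visits) – fall directly out of the clickstream, with no reliance on self-reporting.

```python
from collections import defaultdict

# Each record: (visitor_id, unix_timestamp, url) - a simplified,
# hypothetical stand-in for a real web-server log.
log = [
    ("u1", 0,   "/article/123"),
    ("u1", 20,  "/article/456"),    # grabs a second PDF, then leaves
    ("u2", 5,   "/article/789"),    # classic one-page "bounce"
    ("u3", 0,   "/search"),
    ("u3", 40,  "/article/123"),
    ("u3", 300, "/article/123/fulltext"),
]

def session_stats(records):
    """Group clicks by visitor and derive pages-per-visit and dwell time."""
    sessions = defaultdict(list)
    for visitor, ts, url in records:
        sessions[visitor].append(ts)
    stats = {}
    for visitor, times in sessions.items():
        times.sort()
        stats[visitor] = {
            "pages": len(times),
            "duration": times[-1] - times[0],  # seconds, first to last click
        }
    return stats

stats = session_stats(log)
bounce_rate = sum(1 for s in stats.values() if s["pages"] == 1) / len(stats)
print(stats["u2"])   # {'pages': 1, 'duration': 0}
```

Real studies of this kind work over millions of such records, which is precisely why the resulting behavioural data are more robust than small questionnaire samples.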
Public Engagement and the British Library

An important source of information for U.K. knowledge workers is the British Library’s (BL) Reading Rooms in London and in Boston Spa, particularly the Business and Intellectual Property Centre (BIPC). Admittance to these rooms is gained after registering one’s details with the British Library, including providing evidence of identity. An analysis of the demographics of these registrants gives an indication of the social background of those who have selected the British Library as one of their information sources – a collection which is unique in its coverage and which caters for almost all eclectic interests and tastes. According to the latest available information, of the 71,600 Reader’s passes which existed, 53% were held by students (who can presumably be seen as part of the ‘affiliated’ group), a further 15% by academic professionals, and 32% by a combination of business and personal users (or possibly ‘unaffiliated’). Overall, this proportionate split of two-thirds ‘affiliated’ versus one-third ‘unaffiliated’ differs from other data which give the tail of UKWs a greater share. The difference in the BL Reading Room distribution may be attributable to a number of unique features. Universities in London and the South East have easier local access to BL at St Pancras. Of the 31,600 passes issued, University College London alone accounted for 886, and London-based universities together provided over 3,400 of the users.
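The ‘affiliated versus unaffiliated’ split quoted above follows directly from the percentages, taking students and academic professionals together as ‘affiliated’ (the grouping the text itself presumes):

```python
total_passes = 71_600  # Reader's passes, latest available figure
shares = {"students": 0.53, "academics": 0.15, "business_personal": 0.32}

# Students + academics ~ two-thirds; business/personal ~ one-third.
affiliated = shares["students"] + shares["academics"]
unaffiliated = shares["business_personal"]

print(round(affiliated * total_passes))    # ~48,688 affiliated pass holders
print(round(unaffiliated * total_passes))  # ~22,912 unaffiliated pass holders
```

The 0.68 : 0.32 ratio is what the text rounds to ‘2/3 versus 1/3’.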
144 Chapter 13
Users of Research Output
The geographic location in London means that only those with a strong dedication to accessing BL’s holdings will make the journey into the city (albeit there is a Reading Room at BL’s less accessible Boston Spa, Yorkshire, operation). 41% of the passes are issued to London-based individuals, 46% to non-London-based individuals, and 13% to those from overseas. Though not as dominant as in other demographic measures, there is some evidence that, even with BL’s unique position as a centre of scientific resource excellence, there is still a ‘long tail’ in evidence. Even within the BL data, there is a suggestion of some demand being created and served for individuals not within the formal educational system.
Cultural Considerations

The research studies described above highlight generic trends in scientific information usage online. More significant is the culture of the discipline within which the researcher operates, and the degree of protection over research results which may be involved. There are also the individual’s circumstances – personality, and willingness to adapt to change – which are determinants at the grass-roots level. The culture of the discipline is being modified in many areas by the emergence of a networking culture which puts cooperation and collaboration in prime position. Adaptation to the working practices of researchers from other areas, and in some instances to the activities of the general public, has become a feature of the research process. This is particularly the case when the research project involves crowdsourcing material among ‘citizen scientists’. The process of organising many researchers from all walks of life and from around the world requires greater organisational skill than the alternative ‘small science’ research practice. If organised efficiently, it brings a wider community to focus on a research topic, and in so doing attracts unaffiliated knowledge workers as well as the affiliated – especially where a diverse skill set proves valuable in tackling the problem. Such cultural impact is reinforced by the new platforms which have arisen in recent years, enabling enfranchised knowledge workers to choose the time and place at which they access information sources. The arrival of mobile phones, smartphones and tablets provides the means for time-shifting the search for information to fit their schedules. Evidence from the European Commission-funded project ‘Europeana’ shows a marked difference in the time of day, and day of week, at which mobile users access the cultural database as compared with access through fixed platforms (PCs, desktop computers).
Questions are being raised about how trustworthy, and with what authority, the current means of communication within the scientific world operates – particularly when participation, through cooperative networks, might involve bringing in Unaffiliated Knowledge Workers to operate alongside the Affiliated. Can the organisation of the network ensure that common procedures, standards and acceptable practices are adopted by all participants? What sanctions can be imposed to ensure consistency and quality? How much trust and faith is there in both the established – print journal based – system and the new informal digital communication systems among those who are actual users and, perhaps more significantly, those who are still latent and potential users? What judgements are used to decide how much credibility to give the various forms of published output? Some of these issues were addressed by an investigation funded by the U.S.-based Sloan Foundation and involving the University of Tennessee, CIBER, and a number of publishers. The study assessed behaviour patterns and wishes among scholars and was completed in November 2013 (Nicholas, 2014). A basic question is to what extent researchers at the frontline of research in academia and the corporate world are prepared to open up their research activities to those who do not have the experience or qualifications which are normally entry requirements for joining the team. Another is whether a broader platform of research provides sufficient additional benefits to the research effort to compensate for the additional work of creating a network of participation and ensuring that it operates under acceptable quality conditions.
Disciplinary Differences

As indicated above, there are significant differences in information requirements among scientists according to demography, age and, particularly, scientific subject. Natural scientists differ from life scientists, who in turn differ from technologists, and all differ from social scientists and humanities scholars in their respective needs and search for research information. The split between the main disciplines in terms of physical output of research findings can be seen in Tab. 13.5. Another way of segmenting the STM market for publications is shown in Tab. 13.6.
Tab. 13.5: Output and percentage of research papers published by U.K. researchers, by discipline. The table groups output into Life Sciences (biomedical research, biology, health sciences), Engineering/Technology (engineering), Natural Sciences (physics, chemistry, mathematics, earth & space sciences), Social Sciences and Humanities (social sciences, arts and humanities, psychology), and other professional fields, together totalling U.K. output. (Source: research papers published by U.K. academics in 2010 as included in ISI’s Web of Science.)

Tab. 13.6: Alternative breakdown of STM publications, grouping output into Life Sciences (clinical medicine, biomedical research, health, biology), Physical Sciences (physics, mathematics, chemistry, earth and space), Professional and Engineering (engineering and technology, professional, psychology), and Arts and Social Sciences (arts and humanities, social sciences). (Source: Nature, 16 July 2012.)
Each subject discipline has its own characteristics; one example of a particularly distinctive discipline is described below.
Physics

The global physics research community has in effect been ploughing its own furrow. For many years – even preceding the arrival of the Internet – physicists voluntarily deposited electronic versions of their manuscripts in a freely-accessible database known as arXiv, now run from Cornell University Library with co-funding from the Simons Foundation. Though it is still free to use, it is not – despite the voluntary support it receives from the physics research world – free to create and run. The annual costs of running arXiv are estimated at $800,000. It has benefitted in the past from grants provided by the National Science Foundation (NSF) to its creator, and more recently received protection as a result of being incorporated within Cornell University’s information systems. It has even been suggested (at the time of writing) that a subscription service be established to help defray ongoing costs of the service (see Chapter 25 on Open Access). In addition, the physics research community worldwide is cooperating on a scheme to enable core journal titles to be made available under an international licence arrangement. Instead of individual libraries each paying subscriptions to the main physics journal publishers, physics libraries would combine their buying power by reaching one universal licence agreement with each journal publisher to allow its material to be made freely available. Under SCOAP³ (Sponsoring Consortium for Open Access Publishing in Particle Physics), funds previously spent on journal subscriptions are redirected to support the peer-review service.
Other Subject Areas

As referred to earlier in this chapter, Professors Donald King and Carol Tenopir have undertaken many studies over the years to assess differences among the disciplines, primarily in terms of how they adapted to published material in a print-derivative form. According to Tenopir there are six main areas in which change has manifested itself, some relating to scientific information overall, some to specific disciplines. The amount of reading is going up, but the time spent reading an article is going down. This is based on self-reported data going back to 1977. The average number of articles read per year has gone up from 150 in 1977 to 271 in 2006. However, the average time spent reading each article has gone down from 48 minutes in 1977 to 37
minutes in 2004/6. Tabs. 13.7, 13.8 and 13.9 indicate the diversity between the different disciplines. Combining the tables shows that the overall amount of time spent reading articles has gone up over recent years.

Tab. 13.7: Number of articles read per annum, by faculty and students, across subject areas (medical, science, engineering, social science, humanities).

Tab. 13.8: Time spent reading an article (in minutes), by subject area (medicine, science, engineering, social science, humanities).

Tab. 13.9: Proportion of articles read by respondents at University College London and the University of Tennessee, by subject area (medical, science, engineering, social science, humanities).
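The self-reported averages quoted above (150 articles at 48 minutes each in 1977, versus 271 articles at 37 minutes each in 2004/6) can be combined in a back-of-envelope calculation to show why total reading time rises even as per-article time falls:

```python
# Self-reported averages quoted in the text:
# (articles read per year, minutes spent per article)
reading = {"1977": (150, 48), "2004/6": (271, 37)}

total_hours = {}
for period, (articles, minutes) in reading.items():
    total_hours[period] = articles * minutes / 60  # annual reading, in hours

print(total_hours["1977"])    # 120.0 hours per year in 1977
print(total_hours["2004/6"])  # ~167.1 hours per year in 2004/6
```

Total annual reading time rises from about 120 to about 167 hours, roughly a 40% increase, which is the arithmetic behind Tenopir’s point that the growth in reading is unsustainable.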
As indicated in the above tables, there is a huge difference between the reading habits (or needs) of the medical research sector as compared with engineers, and particularly the humanities (where book reading becomes dominant). Tenopir’s results also suggest that engineers spend nearly twice as much time reading an individual article as medical staff, but read far fewer articles in total. Among the other findings were the following: – Different purposes. Using the critical incident technique, whereby the last use made of the information system by a user is analysed in detail, Tenopir found that 50% of usage was for research purposes and 20% for
teaching, though the actual proportions by individuals varied according to responsibility, status, age and job title.
– Repurposing of information is increasing. There is a rise in granularity. Current awareness (or ‘browsing’) needs are met by accessing the full journal or issue; the level of granularity is the journal. Search needs are met at the individual article level, both for new research and for writing. Specific item needs can be met from a part of an article, a table or an image; the level of granularity here is even finer, at the sub-article level.
– Greater reliance is being placed on relevant aids. Help is wanted to get the user to the best possible source for the required information. Abstracts are increasingly important, as demonstrated by work Tenopir did for the U.S. paediatrics society, and as CIBER has shown in its researches. Thirty-three per cent of readings are of the abstract alone (CIBER), and one third of paediatricians rely only on the abstract.
– There is wider readership from a greater variety of sources and types. In 1977, the average number of journals consulted was 13 titles. By 1995 this had grown to 18 journals, and by 2003 it had become 23 journals. There has also been a greater amount of multidisciplinary reading. Whereas browsing from a wide collection of material has declined over the years to 58%, specific searching for information has grown to 28%. Citation linking is used, but not as much as was expected.
– There are more readings from older material. The split between reading articles up to two years old and articles older than two years is 50:50. Here it is the stage the user is at in their career, rather than just age itself, which is a key determinant. Older faculty (over 36 years) spend more of their time reading printed material and less using screen-based systems. Browsing is even more important than searching when looking at older material.
Statistical procedures used in developing the above behavioural patterns, particularly a more recent claim by Tenopir that ‘scientists are reading less’, have been questioned. The suggestion is that scientists’ reading of articles not only peaked in 2012 but has actually fallen since. This was challenged by Dr Phil Davis, an independent U.S. consultant (Davis, 2014). He questioned the way interpretations were drawn from limited and non-comparable surveys, and the statistical significance of the results given the low level of responses. That aside, it is clear there is a need to accommodate different reading patterns according to different disciplinary habits, traditions and needs. There is no single generic approach which can be adopted to meet all scientists’ information needs. The other main point which Tenopir notes is that the growth rate in reading is unsustainable. This puts pressure on information providers to develop value-added features in their services which address the time constraints that users face. There is a growing barrier to effective information use because reading competes for time with other aspects of users’ business and private lives. This is as relevant to the ‘affiliated’ academic users within academia as it is to the ‘unaffiliated’ knowledge workers outside the academic garden walls. There is no one size which fits all in scientific communication – each discipline, and each subject within each discipline, needs to be looked at separately, and the differing habits of each also have to be understood. The main STM conclusion is that the policies of research funders need to be informed by an understanding of the practices within the research communities if information exchange is to be optimised. The same could be said for all STM publishers – they also need to be aware of the variety of information channels opening up for the diverse groups which constitute the scientific research communities they serve.
Socio/Technology Interface

A comment in a blog from Professor Keith Jeffery, Director, International Relations, at the Science and Technology Facilities Council (STFC), focuses on the way technology and usage are beginning to interact:
Technologically the user interface level is likely to be semantic web/linked open data driven from below by a formal information system with very rich metadata and services linked as business processes for research discovery, analysis, manipulation, mining and communication from the researcher ‘workbench’. Interaction by speech and gesture rather than mouse and keyboard will become the norm. Text and data mining, enabling new ideas and relationships to be found from widespread access to research material on different platforms, is critical in some biomedical areas in particular. Access to vast data resources is also becoming a feature of many natural science areas. Use of cloud technology to capture, store and share information is growing. Collaboratories, involving teams from across continents sharing and exchanging research results, are springing up in many biomedical and physics areas. (Jeffery, 2012).
The social dynamics emerging within the scientific community were summarised by Jeffery (Jeffery, 2012), again on the GOAL listserv: A new generation of researchers is entering the system. They live in the Web 2.0 generation now and will evolve with whatever comes next. They are still impressed by what can be done with http, html/xml and URLs; they don’t imagine a world without them. They will expect immediate interaction with hyperlinked multimedia. They have little or no respect
for legalities or long-established traditions. Glory is counted by ‘likes’ or ‘friends’ and – as an aside – is more quantitative and reproducible than existing ‘glory’ metrics (especially the dreaded impact factors and related indices). They may want peer review as it is now but seem to manage the rest of their lives using online recommendations from peers they either know or respect or both. They will certainly expect to live in a research world with wider conversation including social/economic/political commentators which links with the outputs – outcomes – impact agenda.
A similar point was made in the book ‘Opening Science – The Evolving Guide on How the Web is Changing Research, Collaboration and Scholarly Publishing’ edited by Sönke Bartling & Sascha Friesike: Researchers all over the world use modern communication tools such as social networks, blogs, or Wikipedia to enhance their scientific expertise, meet experts, and discuss ideas with people that face similar challenges. They do not abandon classical means of scientific communication such as publications or conferences, but rather they complement them. Today we can see that these novel communication methods are becoming more and more established in the lives of researchers; we argue that they may become a significant part of the future of research. (Bartling & Friesike, 2014)
Market Research

Despite the above, there appears to be little commitment to adopting sophisticated market research techniques to reach an understanding of the changing needs, cultures and sociology of particular sectors within scientific publishing. New services have been launched on a whim and a prayer rather than through systematic collection of quantified market evidence. There is much initial ‘hype’ in launching services which, on paper, might appear laudable and viable but which in practice fail to push the right buttons for successful new product implementation. The number and range of buttons which need to be pushed increases in line with the impact of the perfect storm. Human behavioural motivations, tied in with questions about estimates of market sizes and numbers of practitioners in relevant areas, have been assumed rather than identified. Professional approaches to market research and product investigation have, with few exceptions, been missing. Studies on the behaviour of researchers as users have been conducted under traditional, legacy, print-based conditions. These give few clues as to how researchers would behave in a fully digital environment. New studies would be necessary to test researchers’ information-gathering activity under emerging conditions. Studies focused on knowledge workers would be useful – these do not currently exist. The use of historical documents, and the critical analysis of
these published studies, do not give a true reflection of the current and future scene; they are at best indicators of a past era. Nevertheless, it has been shown that there is great diversity between disciplines as far as user behaviour patterns are concerned. Each discipline has its own culture, and a one-size-fits-all solution is unrealistic. Even within the same discipline there are sub-cultures with different information needs. There are variations according to discipline, age, social media adoption, computer awareness, geography, and the size and dispersion of the research team. It has also been shown that there is a range of profiles and typologies for researchers. Studies undertaken on information usage among researchers have emphasised this diversity. This fragmentation acts against the traditional role of research journals as ideal delivery mechanisms – more targeted approaches, aimed at the specific needs of small groups of researchers with similar mindsets, may be more appropriate than a packaged journal in a digital era. Digital technology allows such new services to be developed at acceptable cost. The move away from generic information services towards targeted, customised services is gaining momentum.
Summary

The speed with which the above trends and developments have come about has left an impression in many quarters that the current methods of disseminating the results of publicly-funded research are no longer ideal, as commented earlier. This impression has been reinforced by major studies which refer to current scientific publishing as being ‘dysfunctional’ (see, for example, the UK House of Commons Science and Technology Committee report ‘Scientific Publications: Free for all?’ published in July 2004). More recently Gowers (2012), Horton (2012), Monbiot (2012), Nielsen (2012) and Weller (2012) have all commented on the weaknesses of the publishing system and the lack of leadership from publishers in migrating to a digital environment. It has led to a feeding frenzy among those who criticise the role which large commercial journal publishers play in locking access to publications down to a privileged few readers. All the challenges referred to in the past chapters – the Internet (and digitisation), social change (and social media), and resourcing (supply versus demand) – are bearing down on a scientific system which has changed little over the past decades and centuries. This changing environment, or ‘network weather’ (Greenfield, 2010), has to be taken into account when assessing whether knowledge workers can be drawn into the future scientific communication process. Network weather is all the uncontrollable developments which come into play when focusing on a specific
set of tasks. They are not forecastable, but can either upset or facilitate a set of actions which a researcher hoped to undertake. They are the real context or environment within which research operates, and past studies on user behaviour are at best approximate indicators. One of the drivers for change is the social context in which researchers operate. A feature of the current Information Economy has been the rise of social media as a scientific communication system. The following chapters will comment on the impact of developments in social media and social networking with the intention of seeing how these will affect knowledge workers in general in their participation within the wider research network.
Chapter 14 Underlying Sociological Developments

Introduction

It is contended that the rise of social media and social networking within the consumer and entertainment areas of UK society during the past decade has been dramatic. It has changed the information profile and activity of consumers worldwide, and it remains to be seen how this spills over into the research area. This is particularly the case among the younger generations, who have had an early start in adapting to the digital world.

Fig. 14.1: Three levels of interest in scientific publications. The figure distinguishes (1) academia and the corporate sector (the Affiliated) – editorial (creation) and commercial (purchase) activity in research centres and research libraries; (2) the professions (the Unaffiliated) – professionals, SMEs, independent consultancies, engineers, policy makers and funding agencies; and (3) society at large (the Unaffiliated) – the general public, amateur scientists, patients and healthcare, lobbyists, etc.
The next two chapters focus on the social milieu. The first identifies some relevant concepts; specific social media developments and their impact on research are described subsequently. Together these two chapters emphasise that scientific research and its dissemination will witness further significant changes within the next five years, and that these changes will bring the needs of unaffiliated knowledge workers to the fore in scientific information dissemination. There are broadly three levels of interest in scientific information within society, illustrated in Fig. 14.1. Whereas level one has been covered by the current publishing business model, it is levels two and three which have so far been neglected by the scientific information industry. Conceptually, there are some issues which dictate how the business sector comes to terms with the social trends underpinning the two neglected levels. The ‘Tragedy of the Commons’ addresses this by indicating that there were limits which operated within the traditional publishing business, and that many of the commercial policies now being applied were responses to these traditional conditions. They may not be appropriate as the world changes from printed text to digital.
‘Tragedy of the Commons’

The concept of the ‘Tragedy of the Commons’, as described by Hardin (Hardin, 1968), involves a conflict over resources between individual interests and the common good. It comments on the relationship between free access to, and unrestricted demand for, a finite resource. The term derived originally from a situation identified by William Forster Lloyd in a book on population (Lloyd, 1833), and was then popularised and extended by Garrett Hardin (Hardin, 1968). Under the ‘Tragedy of the Commons’, public common land would in medieval times be grazed until one extra beast tipped the scales and made the common land useless for all; it would reach a stage where nothing could survive on the commons. All the herdsmen would suffer, not just the owner of the last beast added. This collapse would happen quickly, would be comprehensive, and would be irreversible. The unwritten assumption by critics of the scientific communication process as it existed up until the 1980s was that publishing was headed in the same direction – that at some stage the collective library budget, the source for purchasing most scientific literature, would be insufficient to cope with the ever-expanding, individually produced (or journal-published) research output.
The system would self-destruct dramatically and quickly under the strain. As new media and new versions of existing publications emerged, the stresses would grow ever greater and the collapse of the STM publishing system would become more imminent and catastrophic. The ‘Tragedy’ conceptualises, in a way unintended by Hardin, the problem facing the pre-digital publication system, in which publishers produced books and journals on an uncoordinated basis. They took no account of the fixed resource – the library budget – from which their output would be purchased. The economics of publishing is driven instead by the research output produced from society’s expanding R&D effort. This was the crux of the commercial challenge facing publishers and librarians in the printed era (see ‘the serials crisis’ in Chapter 7). The ‘Tragedy of the Commons’ did not happen. The switch from a print-based publication system to a hybrid and increasingly digital one has created solutions which have given flexibility to the buying system, and enabled more information to be acquired without causing the library system to collapse; ‘Big Deals’ are an example. But it does indicate that there was something inherently flawed with the traditional, mainly serial-based publishing system, which perpetuated a distinction between the forces of demand and supply. The two forces were uncoordinated; there was a disaster waiting to happen. Any suggestion that these were halcyon days of scientific publishing is a myth. Something had to change, and the Internet has offered new options to meet the demands of scientific research. Shirky’s book ‘Cognitive Surplus’ comments on various studies (many built around Game Theory) suggesting that society has an inbuilt willingness to share rather than act selfishly (Shirky, 2010). Sharing of common resources is better accomplished when all participants can act without being exploited.
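Hardin’s dynamic can be sketched as a toy numerical model (all numbers here are invented for illustration): each extra animal benefits its owner, but degrades the shared pasture for everyone, and one beast past the threshold produces a sudden, total and irreversible collapse.

```python
def commons_yield(herd_size, capacity=100):
    """Total pasture yield for a shared commons (hypothetical units).

    Yield grows with the herd, at diminishing returns, until grazing
    pressure exceeds capacity - then it collapses to nothing for everybody.
    """
    if herd_size > capacity:
        return 0.0  # the irreversible collapse Hardin describes
    return herd_size * (1 - herd_size / (2 * capacity))

# Each herdsman gains by adding one more beast... until the commons tips.
for herd in (50, 90, 100, 101):
    print(herd, commons_yield(herd))
```

The analogy to the serials crisis is that each publisher, like each herdsman, gained individually by adding output against the fixed library budget, with the collective risk borne by all.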
There is a market mechanism behind this, but it is not destructive or aggressive; rather, it treats individuals as participants who cannot be exploited. In addition, the new digital age pushes out the boundaries at which a ‘Tragedy of the Commons’ for information would take place. The finite resource is being extended as the Internet, with almost limitless capacity, makes scarcity less of an issue. This has made the option of change available much more readily than when the fixed, paper-based publishing system operated. But for such change to occur requires other sociological factors to be in place to stimulate demand for new options. It requires organisations, institutions or individuals to create a ‘tipping point’ to enable change to occur and take root.
‘The Tipping Point’

In 2000, Malcolm Gladwell highlighted the fact that – as with the ‘Tragedy of the Commons’ – there is not always a smooth and gradual transition from one market mechanism to another (Gladwell, 2000). Several fundamental issues arise when considering how change happens. Will new media formats have to follow a particular route in order to become established? Can the adoption of more efficient means of scientific communication be brought about quickly and effectively, reducing the cost to society of perpetuating old, inefficient systems? According to Gladwell in his book ‘The Tipping Point – How Little Things Can Make a Big Difference’, new innovations do not necessarily occur or take hold for logical reasons. Changes or ‘tipping points’ can be stimulated by a number of events, some of them informal and subjective, defying the norms of efficiency. In fact, according to Gladwell, successful ideas, products and messages behave as ‘epidemics’ or ‘viruses’: they become contagious. Even the smallest of factors can have a profound effect on creating a virus, and this can happen quickly. After the ‘tipping point’ is reached, subsequent progress occurs geometrically as the virus takes hold. Gladwell claims that there are three rules which set off an epidemic. The first is the ‘Law of the Few’, which means that a few key individuals can have a significant influence on creating change. They are connectors, mavens and salesmen. Connectors know lots of people – the average personal contact network is claimed to be twenty-one people (see later), but connectors tend to know many more. They are creators and sustainers of a wide personal communication network; they are on first-name terms with the movers and shakers in the industry. Mavens are people who are very well informed and share this knowledge willingly. A maven is someone who accumulates knowledge. They keep the marketplace honest. They are not persuaders – they are teachers.
Finally, salesmen have the skills of persuasion. They tend to be subtle in their approach, and their arguments are hard to resist. These different ‘people types’ can have a strong influence in effecting a ‘tipping point’ – in making a new information product or service successful. Some combination of the three types is required to get things moving, to effect a change or for an innovation to succeed. There are some obvious candidates whom we see as connectors, mavens and salesmen in the current controversies over aspects of scientific communication – notably in the area of Open Access (OA) adoption. People such as Professors Stevan Harnad (University of Southampton) and Jean-Claude Guédon (University of Montreal) figure as participants in the ‘Law of the Few.’
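Gladwell’s claim that progress after the ‘tipping point’ occurs geometrically can be illustrated with a toy contagion model (this sketch is the present author’s illustration, not taken from Gladwell’s book): each adopter recruits, on average, r new adopters in the next period, and everything hinges on whether r lies above or below 1.

```python
def adopters_over_time(seed_adopters: int, r: float, periods: int) -> list:
    """New adopters per period under simple geometric contagion:
    each adopter recruits r others, on average, in the next period."""
    counts, current = [], float(seed_adopters)
    for _ in range(periods):
        counts.append(round(current))
        current *= r
    return counts

# Past the 'tipping point' (r > 1) growth compounds; below it the idea fizzles.
print(adopters_over_time(10, 1.5, 6))
print(adopters_over_time(10, 0.7, 6))
```

The hypothetical recruitment rate r stands in for the combined effect of connectors, mavens and salesmen; on this reading, the ‘tipping point’ is simply the moment r crosses 1.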
158 Chapter 14
Underlying Sociological Developments
The second rule of the epidemic is to create ‘stickiness.’ There has to be quality in the message, though the elements of ‘stickiness’ may be small; there is usually a simple way to package information that makes it irresistible. For electronic publishing, this can be tied to technological progress within society leading to a ‘better’ information service – a key ‘stickiness’ factor. At stake is whether the additional outreach achieved for scientific publications is in fact cost effective for society, and whether the message of greater openness is in itself a sufficiently powerful stimulant for change in social information-gathering habits.

The final epidemic creator is the power of context. Epidemics are sensitive to prevailing conditions. Getting an epidemic started involves a different set of human profiles – innovators, early adopters, the early majority, the late majority and finally the laggards. The first two are visionaries and risk takers, whereas the early majority avoid risks and are pragmatists. There is a chasm between the two. This is where the connectors, mavens and salesmen have a role in generating the epidemic: they translate the message from the first group to the second.

The point is that there is a social mechanism behind a change in acceptance and attitudes within society. This is as relevant in electronic publishing as elsewhere. It means that technological efficiency by itself is not enough; some of the ‘tipping point’ ingredients, the social mechanisms, are necessary. Have the more significant aspects of electronic publishing reached a ‘tipping point’? It would appear some have, whereas others still have some way to go. Several studies suggest there was a radical change in user behaviour some ten years after the start of the electronic publishing revolution, but progress was not evenly distributed.
Some parts of electronic publishing still need the connectors, mavens and salesmen to be more active. For example, many recent author studies show that, despite the claimed advantages for authors of having their articles published in open access format in institutional repositories, as many as 90% of authors remain unconvinced. ‘Tipping point’ issues have not yet taken hold within the research author community. This is relevant for knowledge workers generally, as there is a high degree of fragmentation within the sector, and each fragment has low impact or intensity of involvement in STM research. Ensuring that epidemics take hold in each and every sector of the STM publishing community may prove to be a long process.

The challenge facing new technology adoption is compounded by another social mechanism – that of the ‘Long Tail’. This takes into account that while a few people may accept the change brought about by the ICT (information and communications technology) revolution, it is the mass of people who are not at the forefront of developments who also need to be convinced.

‘The Long Tail’
Chris Anderson, editor of Wired magazine, unleashed a global debate with an article entitled “The Long Tail,” originally published in October 2004 (Anderson, 2004; 2009a). The ‘long tail’ is the huge portion of content that is generally thought to be of only residual value to companies catering for mass audiences. Anderson claimed that this residual portion of the demand curve (see Fig. 14.2) is both powerful and profitable. It opens up new audience potential which needs to be considered in addressing business models, including those for scientific communication. The experiences of organisations such as Google and Amazon show how important the ‘long tail’ of users is and how profitable the aggregation of the tail can become.

The ‘long tail’ refers to the hundreds of thousands of products that are not number-one bestsellers – all those products that form a tail to an organisation’s sales activities. In the digital and online world, these products are now booming precisely because they are not constrained by physical retail space, as in the pre-digital age. What once had to be stored in and accessed from buildings and shelving now lives in computer memory and can be retrieved quickly, easily and inexpensively using online systems. The term ‘the long tail’ has caught on in technology and media circles worldwide.

Anderson says that in an era of almost limitless choice, many consumers will gravitate towards the most popular mass-market items, but just as many will move toward items that only a few niche-market people want. For example, with music, many buyers want the hot new releases, but just as many buy music by lesser-known artists or older music – songs that record stores could never carry because of space constraints but that can now be offered online. All that small-market, niche music makes for, in aggregate, a substantial demand. Until the past few years, mass-market entertainment ruled the industry.
In this new digital era, the ‘long tail’ is a new and powerful force, and the tail exceeds the core markets in many instances. In a later book, ‘Free: The Future of a Radical Price,’ Anderson (2009b) comments on the experience of retailers of digital products, who found that 98% of their extensive inventory was bought at some stage. The past and current focus of the consumer entertainment industry is on creating ‘hits’ – producing the few items which sell far more than the average. The same applies to publishing. But this is only relevant in an era of scarcity.
With unlimited supply of the digital versions, earlier assumptions about the relative importance of hits come into question. When physical storage space no longer costs anything, the infrequent or non-purchasers become a market in their own right. Latent demand is unlocked by technology, making niche products available to niche markets in a way which challenges the dominance of a ‘hit’-focused culture. “Amazon has found that 98% of its top 100,000 books sell at least once a quarter” (Anderson, 2009b). Perhaps more revealing is that a further one-third of its sales come from titles not in the top group, suggesting that the market for books not held by an average bookstore is already one third the size of the existing market. “Apple has said that every one of their 1 million tracks in iTunes has sold at least once.” “Netflix reckoned that 95% of its 25,000 DVDs … (was) rented at least once a quarter.” These experiences, quoted by Anderson, show the power of the ‘long tail’ in changing the business paradigm and raising the prospect of the ‘long tail’ as a crucial market opportunity (Anderson, 2009a).
Fig. 14.2: Theory of the ‘long tail’ – the new marketplace. The curve plots popularity against products, with a short ‘head’ of bestsellers and a long tail of niche items.
Some industry observers claim that scientific and scholarly research information displays many aspects of the ‘long tail.’ On the supply side, there are a handful of large commercial and society publishers, but these are complemented by thousands of smaller publishers which together make up a significant part of the market. On the demand side, users of published information are mainly in university and corporate research centres worldwide, but they are
potentially equally matched (and exceeded in number) by trained and educated professionals in wider society. Anderson claims there is still demand for the big ‘cultural buckets’ of hits (subscription-based journals, in academia’s case), but this is no longer the only market. The hits now compete with an infinite number of niche markets. The niches are not ‘misses’ – they are ‘everything else.’ The mass of niches has always existed, but as the cost of reaching them falls – consumers finding niche products and niche products finding consumers – it suddenly becomes a cultural and economic force to be reckoned with. Keeping the niches alive is a much easier task in the digital world than it ever was in the print-based publication era. Traditionally we have lived with the 80:20 rule; current online systems, however, mean that as many as 98% of items are used.

The ‘Tragedy of the Commons’ suggested that a change in the market for scientific publications becomes a feature of an Internet world, and that there may be forces around which could ‘tip’ both existing and new users of scientific information towards more digitally-delivered products. These digital products could have an ever-growing market niche in an Internet environment through being part of the ‘long tail.’ The questions can then be asked whether this phenomenon should become an important element in assessing the total market potential facing published research; whether the many niches should be taken into account, not just the core academic library market; and whether the tail should be included in the modelling of scientific publishing’s business strategies. UKWs (Unaffiliated Knowledge Workers) belong within this ‘long tail’ of publishing. As described by Brynjolfsson, Hu and Simester (Brynjolfsson et al., 2007):

We find consumers’ usage of Internet search and discovery tools, such as recommendation engines, are associated with an increase in the share of niche products.
We conclude that the Internet’s Long Tail is not solely due to the increase in product selection but may also partly reflect lower search costs on the Internet. If the relationships we uncover persist, the underlying trends in technology portend an ongoing shift in the distribution of product sales.
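The arithmetic behind the 80:20 rule and Anderson’s 98% figure can be sketched with a toy model (the present author’s illustration, assuming item demand follows a Zipf law, i.e. demand proportional to 1/rank – an assumption, not a finding from Anderson):

```python
def zipf_shares(n_items: int, s: float = 1.0) -> list:
    """Demand share of items ranked 1..n_items under a Zipf law (share ∝ 1/rank**s)."""
    weights = [1.0 / rank**s for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

n = 10_000
shares = zipf_shares(n)

# The 'head' is the top 20% of items by popularity; the 'tail' is everything else.
head_share = sum(shares[: n // 5])
print(f"head (top 20%) takes {head_share:.0%} of demand")   # roughly the 80:20 rule
print(f"tail (bottom 80%) takes {1 - head_share:.0%}")

# Yet with a large customer base, even the least popular item sells:
purchases_per_quarter = 1_000_000
print(f"expected quarterly sales at rank {n}: {purchases_per_quarter * shares[-1]:.1f}")
```

Under these assumed numbers the head still dominates, matching the 80:20 rule, yet no item goes entirely unsold – which is why aggregators with negligible shelf costs can profit from the whole tail.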
‘Freemium’

In the digital marketplace the most effective price is no price at all, according to Chris Anderson in his book ‘Free: The Future of a Radical Price’ (Anderson, 2009b). He cites a number of examples where novel business models are being applied, including cross-subsidies (giving away a digital recording to sell a TV cable service) and ‘freemiums’ (offering Flickr for free while selling the superior FlickrPro to more serious users). The key point is that the main product or service is made available for free.
Though piracy has dogged the music industry in recent years, the link between the record and related concerts, appearances and merchandising shows that new sources of profit can be found in events other than selling the music CD. It is the spin-off events which generate the money – everything from endorsements and advertisements to local gigs, international tours and performances. New media models have allowed successes such as Radiohead’s name-your-own-price experiment with its audio album. Anderson claimed that individuals below 30 years of age will not pay for information; they are more aware that it will be available somewhere for nothing.

Anderson highlights the mental transaction costs that differentiate zero from any other price, suggesting that a zero price should be considered for the main product and ‘freemium’ pricing for related premium products. Zero pricing for the base research article would inevitably open up the market for the item. Such a pricing model would only be sustainable if there were sufficient interest in the premium products for which payments would be required. This is a drastic course, but one which would bring in knowledge workers as information users, if not necessarily as buyers. Adopting a ‘freemium’ pricing policy is a big ask for a traditional, conservative and protectionist STM journal publishing sector – a sector which goes to great lengths to protect the lucrative corporate revenues from its journal subscriptions and licences.
Corporate Financial Structures

An ongoing shift in product sales can also be the result of another socio-economic trend. As organisations get larger they seek competitive advantage from the economies which can be exploited from their size. This is just as much the case in scientific publishing as in other industrial and commercial sectors. With scale, an organisation can reduce the unit costs of doing business by sharing its overheads or fixed costs across a greater number of unit outputs or products. It can also reduce ongoing or variable unit costs by negotiating better purchasing deals with suppliers. Increased size can also translate into easier and cheaper access to finance and investment capital. “Economies of scale are the cost advantages that enterprises get from size, output, or scale of operation, with cost per unit of output generally decreasing at an increasing rate as fixed costs are spread over more units of output” (Wikipedia). This economic concept dates back to the writings of Smith (1776) and the idea of obtaining larger production returns through the division of labour.
This can be seen among scientific publishers: the few large international commercial publishers are leveraging their size and offering to provide services to smaller publishers. The latter may lack the resources or commitment to invest in the increasing amount of new technology required to underpin developments in electronic publishing, or to provide an international marketing and distribution service. Larger publishers increase their economies of scale by including the products of smaller publishers within their catalogues or portfolios. It is a win-win situation – the larger publisher gets a bigger market share and greater economies of scale, and the smaller publishers get to focus on their other primary and unique activities. The consequence is market dominance by a few large commercial and society publishers, and as technology, globalisation and social change all interact, their dominance over the current publishing scene increases. This dominance does not bode well for UKWs, who are not beneficiaries of the business model that commercial publishers currently pursue.

There was a spate of merger and acquisition (M&A) activity among commercial publishers in the 1980s and 1990s as a further attempt was made to reap economies of scale. However, other dynamics are also at play. The ‘twigging phenomenon’ (see earlier) supports the creation of new sub-disciplines, which in turn lead to new communities, each creating a journal so that quality publications can be shared. There is a constant emergence of new journal titles as science grows and splinters in new directions. This means that although there are advantages to being a large publisher benefitting from economies of scale, there are also market mechanisms in place that create new journals from small, new publishers.
This raises another social dynamic – the ‘wisdom’ which is inherent in the crowd or masses. This wisdom is not to be ignored in the development of new scientific communication systems, particularly those where quality control (the refereeing system) lies at their heart.
‘Wisdom of the Crowd’

‘Wisdom of the Crowd’ has been held up as a challenge to elitism: it has been shown that in certain instances the combined experience and knowledge of a large number of people exceeds the performance and results of a few skilled experts. As a social concept it is a move towards the principle of ‘democratisation’. This underpinning of democracy runs counter to the closed and elitist structures of blind refereeing which lie behind traditional scientific journal publishing.
‘Wisdom of the Crowd’ takes the position that citizens contribute to a common end from the perspective of their own personal interests and background experience. In his book, James Surowiecki (Surowiecki, 2004) suggests that wisdom comes from asking a very wide group of people their opinion on a specific issue; in many cases the combined result of the group’s opinions turns out to be superior to relying on the views of a few experts. Surowiecki’s theory is based on a practical observation. In 1906, Francis Galton saw bets placed on the weight of an ox at a country fair in Plymouth, U.K., and though he had little faith in the typical man’s intelligence, he found that the average assessment from the 800 or so participants was almost exactly accurate. The conclusion from this and many similar experiments is that groups do not have to be comprised of clever people to be smart. The book suggests that one should stop chasing the expert and instead ask the crowd.

Consensus among the masses is achieved by virtue of a mathematical truism: if enough people participate, the individual errors, positive and negative, cancel themselves out, and this consensus gets stronger the greater the number in the ‘crowd’. Perhaps the above is not a true reflection of the scientific communication process, but it is nonetheless indicative of a social mechanism at work which supports the idea of democratisation being just as effective as elitism. The underlying claim is that the value of individual expertise is in many respects overrated (or spectacularly narrow). Expertise is only relevant in narrowly defined activities, and experts are more likely to disagree than to agree. There are many instances of the elitist refereeing system getting things badly wrong – bad research being applauded, and important research marginalised – though it should be emphasised that these are untypical.
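The error-cancellation truism is easy to demonstrate with a simulation in the spirit of Galton’s ox experiment (the numbers below are hypothetical, assuming 800 guessers whose errors are independent and unbiased – the conditions under which the cancellation actually works):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_WEIGHT = 1198  # lb – a hypothetical stand-in for the ox's true weight
# 800 fair-goers guess independently; each is unbiased but noisy (std dev 75 lb).
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.0f} lb (off by {crowd_error:.1f} lb)")
print(f"typical individual is off by {typical_error:.1f} lb")
```

The crowd’s mean error shrinks roughly with the square root of the group size, which is why the consensus ‘gets stronger the greater the number in the crowd’ – and why the cancellation fails when guesses are biased or correlated rather than independent.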
In part such failings can be attributed to the current global community of referees being overworked (Ware, 2005). Ware highlights that a few referees are responsible for a major share of the articles reviewed. Blurring the edges of the formal, peer-review-based scientific communication process is the new online media challenge: the traditional publishing system is competing with informal online services in terms of speed, interactivity and effectiveness in supporting widespread collaboration. Blogs and wikis are examples of the new grass roots of information creation, revelling in the democracy which the Internet has brought about. The wisdom of the crowd is the bedrock in which social collaboration and social networking – emerging processes in the electronic publishing sector – are rooted. Wikipedia is an example of a product which has been built up using the wisdom of the crowd in a structured way.

One specific outcome is that there are web sites which ignore the ‘expert’ and use mass participation by the net generation to create their own
content. As an example, a web service known as RottenTomatoes uses the wisdom of the crowd to rate films and movies; TripAdvisor is another, for those using hotels and travel services. These allow everyone to be a critic, not just those who are professionally trained. There are similar examples in many other walks of life. Even Google’s core search system is based on the wisdom of crowds – the PageRank algorithm aggregates the linking choices made by millions of independent web authors. Amazon, eBay and similar online services look to the wisdom of the crowd to sustain their own approaches to information services. For ‘crowd’ one could substitute ‘Unaffiliated Knowledge Workers’. Many of them exist, and they are not all experts, but in drawing on the sum of all their experiences they are as likely as not to come up with predictions as accurate as, if not better than, those produced by the current double-blind refereeing system in scholarly publishing.
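The sense in which PageRank aggregates the ‘votes’ of the crowd can be seen in a minimal power-iteration sketch (a drastic simplification of Google’s production algorithm; the three-page web below is invented purely for illustration):

```python
def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
    """Power-iteration PageRank over a dict mapping page -> list of outgoing links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outs in links.items():
            if outs:
                for target in outs:      # each link is a 'vote' for its target
                    new[target] += damping * rank[page] / len(outs)
            else:                        # dangling page: spread its rank evenly
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Two pages 'vote' for B, and only B votes for C, so the crowd ranks B highest.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # → B
```

No editor decides that B matters; the ranking emerges from the aggregated choices of the page authors – the crowd.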
Cult of the Amateur

However, there is also a countervailing argument. Andrew Keen, in his book “The Cult of the Amateur” (Keen, 2007), laments the growing democratisation within the media and suggests that it is destroying something valuable within society: quality and expertise. He claims that empowering the amateur – a consequence of extending power to ‘the crowd’ – undermines the authority of experts who have spent years and decades building up a personal knowledge base. Adopting the wisdom-of-the-crowd principle would give experts the same status as an ignorant bystander. Lewis Mumford called the situation “a state of intellectual enervation and depletion hardly to be distinguished from massive ignorance” (Mumford, 1974).

Keen claims it is necessary for experts across a variety of fields to be able to sift through what is important and what is not – what is worth spending time on as opposed to the white noise that can be safely ignored. The victims of the Internet could be those experts who currently provide the sifting process; the threat, according to Keen, is that their role is diminished and we are left to make our own way through the mass of available data without gatekeepers to provide support and assistance. This sifting is a key plank of the STM publishing industry. It provides a service which galvanises the experts to make assessments on what is important and what is not – for the end user a valuable service, and one for which he/she is expected to pay. Refereeing is a service which needs to be organised; it does not arise from the Internet automatically, nor is it free to organise.
However, though there are aspects of the ‘cult of the amateur’ argument which are important, the STM communication industry is not static – it is being driven by many forces (‘the perfect storm’) which change much in the environment, including the way relevance is created and targeted at those who most need it. Discovery tools are becoming more ubiquitous, innovative and personalised. Refereeing can also become more open and transparent. The existing STM book and journal are part of the traditional system, where the ‘cult of the amateur’ would not be appropriate: it needs the expert to make the quality selection for users. But, under the emerging paradigm, there are many other ways whereby good information can be made available to those who want it (see chapter 17 on Future Communication Trends). Though the challenge set by Keen is a fair one, it will be eroded as and when demographics provides a more enlightened audience and the ‘cult of amateurs’ becomes an informed and extensive network of ‘amateur scientists’.
‘Collective Intelligence’

A more recent and supportive treatment of the ‘wisdom of the crowd’ concept can be found in a book by Michael Nielsen (Neilsen, 2011). He comments on the success which has been achieved when a network of individuals participates in achieving a result which would be beyond the competence of any one particular expert. Collective intelligence can be brought to bear on (a) cognitive problems (where there is a definitive solution), (b) coordination problems among diverse groups, and (c) co-operation problems that involve distrustful groups (Neilsen, 2011). Diversity and independence are also important: groups work well under these circumstances and less well under others, and people must make individual, independent guesses for collective intelligence to succeed.

Nielsen described three examples. One is the case of an eminent Cambridge University mathematician, Sir Timothy Gowers, holder of the equivalent of a Nobel prize in mathematics, who used his blog to raise a mathematical conundrum which he himself was unable to solve. Replying to an open invitation, a global outreach of respondents offered their own opinions on the problem raised by Gowers, and a credible answer was arrived at. This became known as the Polymath Project. It was a success as a communication device – many amateurs and experts interacted in solving the problem online. Nielsen pointed to the Galaxy Zoo and Wikipedia projects as other instances where the mass of individual participation has achieved much more than any few experts or computers could achieve on their own.
Nielsen also described the challenge in which Garry Kasparov, the world chess champion, played ‘the world’ at chess in 1999, and only just won. Some 50,000 people from 75 countries participated. The closeness of the match, despite the disparity in chess skills between the participants and the world champion, lay in the ability to marshal the aggregate of skills – each individual being particularly good in a particular area – so that the network of participants was almost as good as the expertise embodied in the world champion. Individually the players would have been no match.

This social phenomenon has implications for a number of aspects of scientific communication. In particular, it challenges the closed shop of refereeing, whereby one or two experts sit in judgement on the quality of a research report, when an open assessment sits better with the knowledge embedded within a wider network. It supports the progress towards greater ‘democratisation’, greater participation in answering research questions, and the use of specialist publications.

We are still in the early days of understanding collective intelligence. It is ironic that the tools being used to share collective intelligence – blogs, wikis, online forums – were not invented by the scientific research community but rather by outsiders such as Matt Mullenweg, the 19-year-old student who created WordPress; Larry Page and Sergey Brin, the two postgraduates who created Google; Linus Torvalds, the 21-year-old student who created the open source Linux operating system; and Mark Zuckerberg, the Harvard student who developed Facebook. We do not have to rely on the collective wisdom of the professional to move collective intelligence gathering forward – it can arise from within the ranks of knowledge workers outside the mainstream publishing industry.
What this suggests is that an open, more democratic information society is more able to embrace different needs of individuals who were locked out of the old system. It brings in different social groupings which are to be found in ‘the long tail’ of the information industry – the disenfranchised or Unaffiliated Knowledge Workers. Their needs have been largely ignored as society has become more specialised, but there are now signs of an awakening to the needs of this large social group.
Designed serendipity

A number of collaborative projects – including InnoCentive (see later in chapter 21 on SMEs) – harness the micro-expertise of many different individuals who would not normally come together. This collaboration was originally defined
by Jon Udell as ‘designed serendipity’ and adapted by Nielsen in his book “Reinventing Discovery: The New Era of Networked Science” (Neilsen, 2011). Designed serendipity is the process whereby the many intractable problems facing a scientist – from large to small – are unlocked by finding the right expert at the right time to help. That person can be anywhere in the world. Collaboration is key to designed serendipity. It is claimed that when we try to resolve a problem on our own, most of the ideas go nowhere. But when many people are stimulated to address the problem, interaction increases through the ‘network effect’. This happens as the number and diversity of participants increases; the greater they become, the better the chances of finding a way through the problem. The problem solving goes ‘critical’. “Once the system goes critical the collaborative process becomes self-sustaining. That jump qualitatively changes how we solve problems, taking us to a new and higher level” (Neilsen, 2011).
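The jump to ‘critical’ can be given a back-of-envelope form (the present author’s sketch, not Nielsen’s own model): if each participant independently has some small chance p of holding the missing micro-expertise, the chance that a problem finds its expert is 1 − (1 − p)^n, which climbs dramatically with the size n of the network.

```python
def p_problem_solved(n_participants: int, p_individual: float) -> float:
    """Chance that at least one participant holds the needed micro-expertise,
    assuming participants are independent (a strong simplification)."""
    return 1 - (1 - p_individual) ** n_participants

# With a hypothetical 0.5% chance per person, scale does almost all the work:
for n in (1, 10, 100, 1000):
    print(f"{n:>5} participants -> {p_problem_solved(n, 0.005):.1%}")
```

At 1,000 participants the figure exceeds 99% – a qualitative change from the lone scientist’s 0.5%, which captures the flavour of the ‘self-sustaining’ collaboration Nielsen describes.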
Everything is Miscellaneous

In his book ‘Everything is Miscellaneous – the power of the new digital disorder’, David Weinberger, a marketing consultant and fellow of Harvard Law School, goes further and extols the virtue of redundancy in a digital age (Weinberger, 2007). Metadata enables access to objects in different ways, and digitising everything enables information to take on different forms. Whereas in the printed world the ‘leaf can only be on the one branch at any one time,’ in a digital world this is no longer true: metadata provides links to objects in multiple ways. ‘Messiness’ now becomes a virtue – unacceptable in a print world, but in a digital world entirely possible, as a variety of novel links can be followed in unstructured ways to the required object. He claims that in a print world there was a finite resource with which to transmit information and hence knowledge; in the new digital world there is no such limit.

“Miscellanised information is information without borders,” according to Weinberger. Anyone can participate in the information process. This does create noise, but it is noise which finds its own level, selection and community. One key development helps give value within unstructured miscellany: the compilation of metadata need not be confined to the rigid structures of traditional classification and cataloguing. It does not have to be so structured as to defy creation or use by a non-professional. Rather, metadata should be created, enhanced and built on by the community itself for different purposes, in a sort of Wikipedia style of ongoing self-improvement.
Authority is therefore again coming under attack. The owners of the ‘stuff’ used to be the sole organisers of the publications they produced; in the online world it is now the end user who is empowered. Sifting is now possible through polling and rating systems (Digg, Twitter, Amazon, Google, etc.) rather than through reliance on a rigid refereeing system. In the pre-Internet days supply created demand; post-Internet, it is demand which creates supply (Shirky, 2010). In effect, Weinberger claims that the elaborate classification of information will be made redundant by what he refers to as the ‘third order’ – social networking and social collaboration through Web 2.0. His arguments rest on the concept of the ‘Wisdom of the Crowd’ (Surowiecki, 2004) and the power of metadata fragmentation (Evans & Wurster, 2000). All result in a greater variety of digital information and communication formats (the ‘miscellaneous’) and the decline of the expert (classification).

The underlying theme of Weinberger’s book is that there is a pool of latent participants in the information process waiting to join in, and new technology now allows them to do so. He cites other commentators in support of his argument that a social revolution is taking place in the way we interact and communicate online. In some areas it is more apparent (physics, for example) than in others, and he makes no comment on those subject areas which rely on a corpus of carefully vetted and structured information to advance the cause of science (such as biomedicine). ‘Standing on the shoulders of giants’ has little room in Weinberger’s analysis. On the other hand, he points to a greater democracy of scientific interaction and a widening of the community which can engage in STM’s creation and use.
As such he is a key supporter of the general principle being advocated in this book – that there is a big market out there in the digital world waiting to participate – the knowledge workers not attached to the main research centres.
Grown up Digital

Another advocate of the change in the information paradigm is Don Tapscott. His ‘Grown up Digital’ (Tapscott, 2008) rests heavily on evidence he collected from the new generation of information users through his research company, nGenera. Tapscott highlights the generation gap: NetGeners are defined as a distinct, new and powerful social group with attributes different from those of the older Baby Boomers. He claims the new NetGeners exhibit a strong social conscience. They are more participative within society, more collaborative, and
170 Chapter 14
Underlying Sociological Developments
supportive of greater openness than the earlier generations. They make considerable use of the new Internet communication tools available to them to effect this. NetGeners have a different mindset, created to some extent through their early exposure to IT, interactive gaming, and the Internet, which gives them a different set of skills. These skills are just as relevant as the old linear skills learnt by the Boomer generation, and in fact are more appropriate for the opportunities which the Internet offers. In the old days, the Boomer and Generation X periods, the focus was on collecting 'eyeballs', on creating site stickiness, but overall it was using HTML presentation platforms to broadcast to audiences. The big change came with XML, which allows collaboration and interactivity in creating communities with like interests. "The old Web is something you surfed for content. The new Web is a communications medium that enables people to create their own content….". This shift from one-way broadcast media to interactive media has had a profound effect. The distinction between bottom-up and top-down organisational structure is at the heart of the new generation, with the NetGeners relating more closely to the bottom-up approach. Technology has become all-pervasive, but whilst the Net Generation assimilated technology because they grew up with it, the Boomers/Generation X had to accommodate to it. This explains why, within family circles, it is often the young who take leadership over ICT issues, in which they are more confident. Tapscott explored eight characteristics which differentiate the Net Generation from the earlier (Baby) Boomers. These include:
1. Freedom is prized
2. Customisation of things for their own specific needs becomes important
3. Collaboration, not diktats from above, to produce new extended relationships prevails
4. Traditional organisational structures and procedures are scrutinised more intensely
5. Integrity and openness are demanded, as is transparency in all things
6. They want to have fun, be entertained and play
7. Speed is a prerequisite
8. Innovation is an essential part of life.
They influence each other through so-called 'N-fluence Networks' – online networks of NetGeners who, among other things, discuss brands, companies, products and services. They do this through creating content, which can be in the form of blogs, wikis or other such online combinations. Some 40% of American teens and young adults have their own blogs, according to the Pew Research
Center. In this way "they are democratising the creation of content, and this new paradigm of communication will have a revolutionary impact on everything it touches…" It suggests that the writing is on the wall for broadcasting services – TV as well as newspapers, and possibly parts of the scientific communication process. It also suggests a new working relationship with social institutions. There are suggestions that the NetGeners wish to take an active part in creating new products and services which match their customised and personalised needs. Where barriers are put in place to limit such collaboration, the NetGeners use social networks to highlight their concerns. This behaviour was first identified by Alvin Toffler in "Future Shock", in which he referred to the 'prosumer' (Toffler, 1970). Tapscott extends this to prosumption – the interaction of consumption with production to influence the creation of useful products and services. It is a behaviour-led rather than a purely technological change which Tapscott describes. This again suggests that UKWs have a growing role to play in the creation of scientific information in future as they become more digital.
Here Comes Everybody In a related vein, in 2008 Clay Shirky published a book entitled 'Here Comes Everybody', with the subtitle 'How change happens when people come together' (Shirky, 2008). Shirky challenges one of the basic tenets of the traditional scientific publishing process – its fixed institutional governance and its reliance on an elitist section of the community. In his book Shirky claims that most barriers to group action have collapsed, and without these barriers knowledge workers become free to explore new ways to create and disseminate information. As part of this breakdown in barriers, most of the relative advantages which large established organisations once enjoyed have now disappeared. In their place, sharing and collaboration become the anchor for the community. Forming and sustaining groups of people is complex, and institutions reflect these difficulties. New kinds of group sharing are emerging, built on the increasing complexity of networks among individuals. Philip Anderson (Anderson P, 1972) claimed that "more is different": complex behaviour is greater than the sum of its constituent parts. The network of shared interests is both extensive and intensive in nature. In traditional companies and institutions, the hierarchical nature of the organisational structure prevents the free flow of information. Institutions arose, according to Coase (Coase, 1937), to enable transactional and contractual costs of
doing business to be optimised. Such transaction costs are now falling. It is true that small decreases in costs can be absorbed by existing institutions – however, large changes in transaction costs require new systems. There is no payoff for large institutions. They protect the status quo – they have an inbuilt conservatism, a natural fear of change which would destroy their established structures. They may have difficulty transporting themselves across the 'valley of death' (see chapter 7). New facilities have emerged to support the creation and extension of such social networks. Flickr provided the means to share (and tag) photographs. No oversight or central controls were involved, just self-regulation by the participants. According to Shirky, the rise of group activity is threatening the established order, and this can be applied to the scientific publishing industry. Group activity includes (a) Sharing, (b) Cooperation, and (c) Collective Action. All three of these are becoming cheaper. Examples of sharing include social media sites. Cooperation requires a change in behaviour, including greater conversation (using new social tools). Collective Action requires commitment. A similar view has been expressed by Weinberger (Weinberger, 2012), who stressed that knowledge creation is now the work of a wider network and not confined to a few superstars. The final product of networked science is not knowledge embodied in self-standing publications…… It is the network itself – the seamless connection of scientists, data, methodologies, hypotheses, theories, facts, speculations, instrument readings, ambitions, controversies, schools of thought, textbooks, faculties, collaborations and disagreements…. (Weinberger, 2012)
New services enable cross-dataset and cross-media searching in a meaningful way, a great leap forward over traditional research article publication and reading. Including more people within the social group supports the 'Network Effect' (Metcalfe's Law): when you double the size of the network you quadruple the number of potential connections. David Reed (an early Internet pioneer) went further, suggesting that the value of group-forming networks grows exponentially with the number of users. Reed's Law grows faster than Metcalfe's Law because it counts not just the links between individuals but also the many possible links to sub-groups and other networks available as potential sources for individual researchers. In effect, Shirky highlights the power and influence of grassroots collaborations and networks as the building blocks of a new democratised information society. Within this he sees a wider spread of engagement from individuals who
are currently outside the formal institutional settings. New technology gives everyone (hence the title 'Here Comes Everybody') the ability to communicate more effectively with kindred spirits and interest groupings. The challenge this poses to a closed journal publishing system – highly structured and hierarchical – is evident. It is also evident that this open communication embraces knowledge workers, who become enfranchised within the communication system.
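The arithmetic behind the Metcalfe and Reed 'laws' referred to above is easy to make concrete. The sketch below is purely illustrative (the function names are ours, not drawn from either author): it counts the potential pairwise links and the potential sub-groups in a network of n members.

```python
def metcalfe_connections(n: int) -> int:
    """Potential pairwise links among n members (Metcalfe's Law, ~n^2)."""
    return n * (n - 1) // 2

def reed_subgroups(n: int) -> int:
    """Potential sub-groups of two or more members (Reed's Law, ~2^n)."""
    # All subsets of n members, minus the empty set and the n singletons.
    return 2 ** n - n - 1

# Doubling the network roughly quadruples Metcalfe's count...
print(metcalfe_connections(10), metcalfe_connections(20))  # 45, 190
# ...while Reed's count of possible groups explodes far faster.
print(reed_subgroups(10), reed_subgroups(20))              # 1013, 1048555
```

This is the sense in which Reed's Law outpaces Metcalfe's: it values every possible group that could form, not merely every possible pair.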
Cognitive Surplus As a result of improved productivity, an increasing amount of personal surplus time is available to those in western societies (Shirky, 2010). This creates the asset base from which improved participation in social media can take place. As an illustration, Shirky looked at the importance of drinking gin as a palliative during the seventeenth and eighteenth centuries. It enabled mainly the poorest members of society to lose themselves in alcoholic stupors as a way of using up their surplus time and coping with difficult lives. A more recent version of this is the focus on television as an escape mechanism for the masses, a way of making themselves feel less isolated. Perhaps controversial, but the point is made that there is more time available for non-work related things than there was in the past. These are instances of using up surplus time. What has happened is that the spaces which did not exist in the twentieth century are being filled rapidly by the new experience of not just being a consumer (of broadcast material) but also becoming an active participant. This is new and has still not reached its peak. The web exposes us to others whose interests match our own. The discovery costs of finding those with like interests are now low. Participants in various social networks do not abide by strict copyright rules, but instead adopt their own version of what is acceptable – and that excludes profiteering from others' socially-created work. They do not want to inhabit a world of commerce but rather seek affirmation and recognition from peers, friends, colleagues, acquaintances, etc. "Within the community purity of motivation inside the community matters more than legality of action outside it. Intrinsic motivations take precedence over extrinsic motives" (Shirky, 2010). It is claimed that TV absorbs over one trillion hours of free time. 
If only 1% of this were to migrate to a sharing and collaborative platform, it would mean a radical change in the structure of the information/entertainment industry. The two main stimuli are, first, the trillion hours of free time, and second, the emergence of public information exchange as an enhanced social function using social media.
The definition of 'media' has changed over the years. Formerly it was one-way broadcasting; then it became two-way; and now it obscures the boundaries between what was formerly private communication and public information. By the same token, the economics has also changed – since Gutenberg the infrastructure was usually owned by the content providers; now the infrastructure (platform) is ubiquitous and empowers the content creator. Increasing 'free time' (surplus) is not a new, digital phenomenon. What is new is the ability to share content, and the variety of means available for doing so within the surplus time now available. And this leads on to participation in the communication process by a far wider group than has hitherto been the case in the print world. It enables knowledge workers to share their thoughts and experiences with the hardcore of academics/researchers on a collegiate basis.
The Fast Food Generation According to Professor David Nicholas, CEO of the CIBER research group, we are facing a 'fast food' generation. We expect immediate gratification from access to information. We have become 'promiscuous users' in the sense that we only want to read 1–3 pages of text at any one time. Short articles are viewed more often than longer ones. The average reading time per article is about 3 minutes; anything more and the attention span wanes (see 'Is Google making us stupid?', Carr, 2008; 2010). Abstracts, and possibly synopses, are back in fashion. In accessing digital resources, the new end user flits between data sites, in some cases without any pre-knowledge of what he/she may find. Some sites find that over half their users do not come back. When information of relevance is found, a 'squirreling' behaviour is adopted: articles are printed out, put away, and very likely never read again. iPads are also revolutionising the way we read. The links which can be followed have exploded the number of sources which can be accessed easily in exploring a given topic. Linking is a challenge to the image which publishers have sought to develop and protect – in following links it matters little where the source resides, as long as it appears high on the ranking list provided by the author or search engine. A further revolution is in development, again a reflection of behaviour patterns traceable to mass consumerism. Smartphones allow end users to access material on the move, at any time, without the need to carry bulky IT equipment. The smartphone, such as the Apple iPhone or Samsung Galaxy, is another paradigm shift about to happen. A 'lite' interface is also seen as a boon to the busy end user.
This has an effect on how the traditional players in the scientific communication sector are perceived, not only by researchers based in academia but also by the new market sectors which are beginning to recognise that there is valuable information which they seek and which is now accessible through mass consumer platforms. It opens up a new world – new challenges – for the scientific communications industry. The information needs of trained and educated knowledge workers are about to be embraced through developments which can be traced back to the rise of consumerism in general.
Natural Group Size Robin Dunbar, an Oxford University anthropologist, has suggested that the 'natural size of the group' is about 150 (Dunbar, 1992). That is the number of people with whom most humans can maintain a stable relationship. These are relationships in which an individual knows who each person is in their contact network, and how each person relates to every other person. Dunbar theorised that "this limit is a direct function of relative neocortex size, and that this in turn limits group size … the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained" (Dunbar, 1992). The Dunbar number appears to be the same as the number of people in a typical pre-industrial village, a professional army unit such as the Roman army's century, a Hutterite farming community, and a scientific sub-speciality. However, social networks, and services such as FaceBook, have created a new form of social bonding which replaces the traditional concept of the natural group size. FaceBook, LinkedIn, WhatsApp, etc., enable many thousands of people to group together in a communication network without creating tensions or pressures on the system or the people involved. The NetGeners use communication networks which are orders of magnitude larger, far more complex and much more efficient than the networks which were possible for older generations. This extension beyond the Dunbar figure, leading to hundreds of people being part of any one researcher's social network, enables researchers outside the traditional world of a close-knit academic audience to be reached. Unaffiliated Knowledge Workers can more easily be brought within the new capacities of the technology-enhanced neocortex. One of the most intriguing aspects of 'N-fluence Networks' or groups is the capacity to build trusted relationships with people outside one's traditional social circles. 
As our social networks grow so does the universe of people we rely on, and can call upon, to help make decisions. The mechanism which is used to build trust is new within the digital age.
Trust Trust in a publication system is essential to attract and retain users. The print publication system has evolved over generations, and has become a cornerstone for the dissemination of research results. It is only in the past decade or so that a new alternative has arisen, with a more digital construct and an audio-visual interface. Does the wider population, those who might have an interest in getting easier access to scientific publications, have trust in the existing system, or do they rely on other factors to provide trust? The so-called 'digital native' has emerged, immersed in the new information technology that surrounds us. Instead of being brought up on a menu heavily laced with traditional print, the new generation has personal computers, television, iPods, MP3s, smartphones, online games and socially interactive software with which to interact. This has an impact on trust. As has been described by Kieron O'Hara (2004), trust is different in the scientific communication area compared with the world of the Internet. He felt that there was a divide – publishers versus Web 2.0 – where neither side really understood the challenges and issues facing the other. With the Web, there is currently no way for people to authenticate what is on the Web and to assess its quality. We have to take it on trust – there is a fear that the 'unwashed masses' could break the system. Either Web 2.0 will be regulated or it will collapse ('tragedy of the commons'). For generations there has been trust that the findings published in international scientific books and journals were reliable, accurate and could be built on to push forward the boundaries of science. They were a solid foundation. The publications had been sifted through the refereeing system to ensure that only those which withstood the test of quality would be published and disseminated. On the whole the scientific community believed in the publication system. 
They participated in it as willing editors, referees, authors and readers. The community ensured that time would not be wasted checking each nugget of information to see whether it was correct. With the Web there has been insufficient time for the scientific community to transfer their trust from the traditional scientific system to the world of the Internet and Web. There is no stamp of approval by institutions in which the individual scholar has confidence. Much of the information on the Web, particularly in social networking, is uncorroborated. Trust is crucial to new forms of publication, but the metrics of trust are ill-defined. There are some highly trusted specialists whose web sites one can rely on. Trust syndicates are forming, protected by security
and authentication controls. Within Web 1.0 there have been several ways to overcome concerns about trust:
– eBay has developed a trust metric based on feedback from buyers and sellers. Buyers and sellers try to make sure that they remain trusted participants by sticking to the rules of transaction; failure to do so appears on their eBay record.
– Amazon uses reviews sent in by book purchasers to provide more information about a title without resorting to advertising hype.
– Slashdot uses Karma (or the Wisdom of the Crowd).
– Google pioneered the PageRank system, which uses a proprietary trust metric based around the numbers of links to an item.
It would be necessary to set up a mechanism to provide quality assessments for the Web. At present these exist more in the breach than the observance. But without trust in both the Internet and the scientific areas moving together we face a mismatch. As such, electronic publishing in support of the potential information needs of the wider community will never gain the benefits which the Internet infrastructure potentially allows.
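The link-counting idea behind PageRank can be illustrated with a toy power iteration. Google's production algorithm and its trust signals are proprietary and far more elaborate; the three-page 'web' and the function below are purely hypothetical, a minimal sketch of the published principle that a page inherits standing from the pages that link to it.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: 'links' maps each page to the pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with a uniform score
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages  # a page with no links shares evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new[target] += share
        rank = new
    return rank

# Page C receives links from both A and B, so it ends up ranked highest.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

In this tiny graph the page with the most inbound links scores highest; the same principle, applied at web scale, turns raw link counts into a de facto trust metric of the kind the chapter describes.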
CIBER/Tennessee Study Professor Carol Tenopir (University of Tennessee) was involved in a Sloan Foundation-supported study which looked at the trust and authority attached to scholarly resources. The study, which also involved CIBER Research Ltd, ran from September 2012 to November 2013. It capitalised on Tenopir and King's research on scientific user behaviour going back thirty years (Nicholas et al., 2014). Tenopir makes the distinction between information usage and information publication. The motives are different, though the individuals involved may be the same. Usage is stimulated by the need to know what is currently available and relevant; publication is to meet demands for future funding, tenure, recognition, etc. The latter requires a quality product created through a well-established and formal refereeing system. The former relies on the most appropriate and timely way of getting access to research results and can be more informal in approach. The distinction between use and publication raises the question of the relevance of blind refereeing in a digital age. It questions whether trust is wholeheartedly in support of a formal refereeing system which includes such idiosyncrasies as coercive citations by editors, 'self-citation' by authors, the halo effect,
etc., in assessing submitted manuscripts, and the need to project a personal face rather than delivering a public independent service. Trust, according to Tenopir, highlights the greater 'connectiveness' which trustworthy publications generate. Isolation – the lack of interaction with the publication – is anathema to a trusted publication. The range of other features which help to enhance an item's trustworthiness include whether the item has been referred to by colleagues; familiarity with the author; whether peer review has been carried out; and whether – as a crude assessment – there is a high impact factor for the journal carrying the article. However, the impact factor appears to be more of an issue for Publishing purposes than as a guide to Usage. And it is also apparent – contrary to expectations of the millennium generation – that it is the younger scientists who are locked into the metrics of citations as an indicator for publishing support. The stated aim of the University of Tennessee/CIBER joint study was 'the study on how researchers assign and calibrate authority and trustworthiness to the sources and channels they choose to use, cite, and publish in' (Nicholas et al., 2014). The research involved three stages – focus groups (15), interviews (87), and an overall survey. As at September 2013 the following results were disclosed. One surprise to the project team was the lack of serious engagement with the issue of trust by most of the contacts, even among social scientists and younger researchers:
– Social media as a source of information was not trusted, though some social scientists found it a source of ideas. However, all the contacts knew people who blogged and tweeted, and there was a recognised role for social media in outreach. 
– The summary analysis of reasons for trusting a citation is listed below, according to the number of responses received:
– The author was known to the researcher
– The journal or conference was known to the researcher
– The reference was a seminal work in the field
– The reference supported the methodology
– The research group/institution was known to the researcher.
As UKWs are not subject to the distinction between Usage and Publication in their assessment of a trusted information source, it is expected that their profile of trusted sources would be less inclined to prioritise formal (journal article) formats of publication, but instead accept that their particular circumstances may require various speedy and therefore informal (social media) sources. This would fit in with the fact that the newer platforms allow for time switching,
support many mobile access systems, and have a greater preference for delivering free or inexpensive search results. In effect, not too much can be read into these early results, but it does seem that trust in the existing, print-based publishing system is not a key driver in determining the direction which scientific communication is likely to take in the next few years – either in support of UKWs or in further extending their lockout.
Confluence of Trends The evidence is that there are a number of trends which are resulting in a revolution in the way scientific information is being handled. These trends will be examined in subsequent chapters. They include:
– Open Access
– Citizen Science
– Science Blogs
– Format changes in electronic publishing
– Alternative publishing systems
– The Internet and Web 2.0
– Cooperation and collaboration.
These are changing the relationship between Science and Society. The traditional publication system enabled the inventor of an idea to be rewarded and recognised through the publication of a formal article, refereed by peers. Journals have evolved over the past three and a half centuries to enable articles to be brought together in relevant packages. This system has blossomed into a rich body of shared knowledge, a collective long-term memory that is the basis for human progress. This system for sharing knowledge has worked extremely well, and has changed only slowly. But now significant social and technical changes are being seen which will potentially alter the way scientific knowledge is created and disseminated. And these changes will have a significant impact on the ability of knowledge workers in the broader sense to become beneficiaries of the output of the global research effort. The rise of social media is a particularly important development which could change the way scientific results are communicated to a wider audience in the near future.
Chapter 15 Social Media and Social Networking Introduction As reflected in the concepts described in the previous chapter, there are important changes taking place in communication patterns among scientists, based as much on social concepts as on changes in technology. Social change is helping to alter the paradigm for scientific publishing. This chapter looks at what social media are available, how researchers have adapted to them, and how UKWs may benefit as social media become more entrenched within the research cycle.
Social Media and Social Networking It has been suggested in an earlier chapter (chapter 6) that there is a tsunami waiting to break over the traditional print-based publication system. With one in six of the world's population having a FaceBook account; with Google dominating access procedures; with blogs and forums providing a focus for lively exchange among like-minded individuals; with Twitter providing immediate communication channels for fast-breaking news; and with blogs providing personal assessments of trends and issues, the challenge of adapting research articles to cope with a new technological/social environment is real. It is frequently claimed that young people innovate in ways that were previously inconceivable. Students and professionals worldwide can join many networks, ranging from personal interest groups, such as moderated bulletin boards, to corporate research networks, such as InnoCentive, or forge new relationships. New generations participate in Wikipedia, disseminating knowledge freely for everyone. However, when it comes to the actual process of formal STM publishing, it all becomes very passive and traditional. Have research authors taken on board the tools available through social media and applied them to their own research procedures? A study undertaken by CIBER on behalf of the Charleston Observatory (CIBER, 2011) assessed this question. In analysing 2,414 responses to an online questionnaire sent to authors of research reports who – a priori – were assumed to have some interest in or involvement with social media, the results suggested that there remains a long way to go. Adoption by the scientific research community seemed at best peripheral in 2011. In the CIBER study, seven areas of the research life cycle were defined, and social tools which corresponded with each of these were identified. All
respondents rated their use of social media, with the most popular social media support being for 'collaborative authoring'. The next most popular use was in 'conferencing'. The areas of social media, in descending order of popularity, were:
– collaborative authoring (just over 50% used this process)
– scheduling
– communicating through social networks
– image sharing
– blogging
– micro-blogging
– social tagging.
The majority of users – and these were self-selected as being more likely to be at the forefront of social media adoption – made use of only one or two of the above processes. The greatest correlations occurred between those who used blogs, micro-blogs and social tagging. It appears that the aims of the social tools – to enhance speed and extend the global communication of results – are not yet in sync with the main work processes of the author community, the goals of the latter being to improve research output, disseminate the results and enhance personal esteem. Without some form of compulsion or mandating, the application of social tools to the scientific publishing process remains experimental and marginal.
The X,Y,Z Generations There are a number of ways by which individuals in society have been categorised by their information usage, often using 'generational differences' as the basis. Tab. 15.1 summarises several of the main classifications:
Tab. 15.1: The Generations – an overview
Social sector        % of current population   Born within years   Related definitions
Pre Boomers          %                         (Up to )
Boomer Generation    %                         (–)                 Baby boom generation
Generation X         %                         (–)                 Baby bust generation
Net Generation       %                         (–)                 Milleniums or Generation Y
Next Generation      %                         (–present)          Generation Z
According to ‘The Researchers of Tomorrow’ project, a study commissioned by Jisc and the British Library, research students need more face-to-face and informal
support tailored to their own subject area to help them fully embrace open web technologies and social media (Jisc/BL, 2012). The survey covered 17,000 doctoral students over a three-year period and assessed the research behaviour of a sub-group of Generation Y students born between 1983 and 1992 (which differs slightly from the definition applied above). The study looked at research students' use of social media applications within the research setting, and found that, over the three-year period, there had been only a gradual increase in use of the social web and social media. For example, 23% of all the students had made passive use of online forums, but only 13% had taken an active part in any online discussions; 23% followed blogs, but only 9% published a blog themselves. Active take-up of institutionally-provided open web resources is low. The students requested more information about technologies and applications such as Google Scholar, cloud computing, EndNote and Mendeley, the implication being that these applications were not included within their mainstream activity. Other findings from the report suggest that doctoral students' understanding of intellectual property (IP) and copyright conditions appears to be a source of confusion rather than an enabler for innovation. The study also highlighted a marked dependency on published secondary sources rather than primary sources as the basis of students' own original research, regardless of discipline. In essence this Jisc/BL study indicated that the situation is complex, with no clear impression emerging of how social media are affecting this particular part of the generation of researchers.
Social Media in Academia Two small-scale comparisons of the use of social media by researchers in the U.K. and U.S.A. are summarised in Tab. 15.2.
Tab. 15.2: Use of Social Media
Media          U.K. researchers   U.S. researchers
Videos         %                  %
Blogs          %                  %
Use comments   %                  %
Podcasts       %                  %
RSS feeds      %                  %
Twitter        %                  %
Numbers        ,
Source: Nicholas, b
Researchers reported widespread but mainly occasional use of social media. The low usage is partly a result of there being no accepted standards for judging quality on the various social media services. The creation of content to fill these services is also constrained by the fact that there is no reward to be had from having information ‘published’ in such non-traditional formats. Nevertheless, the pressure is on, and ‘informal’ communication services are encroaching on the overall scholarly communication system, as illustrated in Fig. 15.1.

Fig. 15.1: Impact of informal communication on the traditional STM publishing market. The figure places blogs, wikis, bulletin boards, grey literature and social media at the informal end of a spectrum, with conference proceedings, journals, book series and research articles at the formal end.
Tab. 15.3 gives a breakdown of U.S. users of online information systems, derived from Pew Research.

Tab. 15.3: Demographics of social media use, broken down by sex (male/female), age band, race (White, Afro-American, Hispanic), educational level (high school and less, some college, college degree), population density (urban, suburban, rural) and annual household income band. Source: Maeve Duggan and Joanna Brenner, The Demographics of Social Media Users, Pew Research Center’s Internet & American Life Project, www.pewresearch.org.
According to the Pew Research Center’s Internet update on social media (Duggan, 2013), 73% of online adults in the U.S.A. now use a social networking site. FaceBook is the dominant platform in terms of the number of users, but a striking number of users are now diversifying onto other platforms.

Fig. 15.2: Social media sites, 2012–2013 – the percentage of online adults using each of the following sites in 2012 and 2013 respectively: Facebook 67% and 71%; LinkedIn 20% and 22%; Pinterest 15% and 21%; Twitter 16% and 18%; Instagram 13% and 17%. Source: Pew Research Center’s Internet Project Tracking Surveys, 2012–2013. 2013 data collected August 07–September 16, 2013; N = 1,445 internet users ages 18+; interviews conducted in English and Spanish, on landline and cell phones; the margin of error for results based on all internet users is +/–2.9 percentage points.

42% of online adults
now use multiple social networking sites. In addition, Instagram users are nearly as likely as FaceBook users to check in to the site on a daily basis. Fig. 15.2 illustrates the relative numbers of users of the key social media sites. Google continues to dominate the list of most used search engines – 83% of U.S. search users use Google. The next most cited search engine is Yahoo, mentioned by just 6% of search users. When Pew last asked this question in 2004, the gap between Google and Yahoo was much narrower, with 47% of search users saying Google was their engine of choice and 26% citing Yahoo. Social media has become ubiquitous within society, and it is only a matter of time before the attitudes and energy which brought social media services into being are also brought to bear on scientific communications. Several of the main social networking sites – and an indication of the sheer numbers of their members – are listed below.
Numbers of Users of Social Media

FaceBook
FaceBook had 1,440 million active members (June 2015). In 2012, 149 million users were within the U.S.; the fastest growth was in South America and the Far East. 30 billion pieces of content were shared each month, and an additional 2 million sites link with FaceBook. FaceBook’s value has been set at $230 billion, based in part on solid advertising revenue. Though its advertising has not been monetised as effectively as Google’s (¢40 per month per visitor for FaceBook compared with Google’s $2.11), investors bought into Mark Zuckerberg’s entrepreneurship in the IPO, which gave him control over voting rights – all this despite the concern over the ‘over-valuation’ of FaceBook at the time of its IPO in May 2012.

Google
The number of monthly unique Google searches was 1,170 million (November 2013), giving Google a 75% share of the online search engine market. The number of monthly unique visitors to Google sites amounted to 187 million (March 2014). There were 300 million monthly active users of Google+ (October 2013). The global number of Gmail users has been put at 900 million, and the number of Google+ unique mobile monthly users was 20 million (October 2013). The average duration of a Google+ visit was 3.46 minutes (August 2014) (source: Google web site). Google was valued at $23 billion when it went public in 2004.
Twitter
By March 2013 a total of 300 billion tweets had been sent, amounting to 500 million per day (January 2015). There are 302 million active users (April 2015), with 36 million unique monthly visitors to the desktop version (September 2013). Black people, who account for about 12% of the U.S. population, made up 25% of the Twitter population. Though 30,000 people a day were signing up to tweet, this did not match market expectations (resulting in a change of top management). The company was valued at $23 billion (2015).

Instagram
There are 300 million Instagram active monthly users, or 75 million per day. The number of U.S.-based Instagram users in 2015 was 77.6 million, amounting to 28% of the U.S. population; 13% of Internet users also use Instagram. FaceBook paid $1 billion for the company. There has been rapid growth in the photo-based Instagram, to the extent that it is challenging the text-based FaceBook and Twitter for acceptance (Radio 1, December 2014).

Wikipedia
Wikipedia includes 5 million articles (or 36.5 million pages), to which around 750 new articles are added each day. The average number of edits per page is 21.25. The number of registered users is 25.5 million, of which 122,000 have been active in the past 30 days. Wikipedia seeks voluntary financial support to keep the service running (November 2012).

Yahoo
Yahoo has 1,000 million monthly active users, 139 million of whom were using mobile devices (45% of them on tablets). Yahoo has a 10.4% share of the U.S. online search market. It generated $1.78 billion in search ad revenues in 2014. Its market value in 2011 was $19 billion.

Bebo
BlogEarlyBlogOften, or Bebo, was launched in San Francisco in January 2005. Bebo had 22.9 million unique visitors and 10.3 billion page views in February 2008. It was sold by its founders, the Birches, in 2008 for $850 million. This was prior to
Bebo exiting the social networking market and switching to providing app support services after the company declared bankruptcy in July 2013.

QZone
QZone has 644 million active users, most of whom are based in China (May 2014). It accounts for 40% of the world’s social media users.

LinkedIn
During the first quarter of 2015 LinkedIn had 364 million members, up from 296 million a year earlier. It is a professionally-focused media platform. In 2014 LinkedIn generated $2,220 million in revenues, most coming from talent solutions, online recruiting, marketing solutions and premium subscriptions; in 2009 its revenues were $120 million. Half of LinkedIn’s membership is in the USA and half international, and 60% are academics.

Mendeley
As of November 2012, Mendeley had 2 million users: 31% were from the biosciences and medicine, 16% from the physical sciences, 13% were engineers and 10% computer and IT specialists. On average each user collected 142.8 papers and spent 1 hour 12 minutes per day studying the literature. Mendeley was acquired by Elsevier in 2013.

MySpace
There were 50.6 million monthly active users and 300 million monthly videos viewed (January 2015), down from a peak of 75.9 million users (December 2008). Though it was overtaken by FaceBook as early as April 2008, MySpace remains a top web property, with 43.2 million monthly U.S. visitors. Started in August 2003, it was acquired by Rupert Murdoch (News International) in July 2005 for $580 million, and was bought by Specific Media/Justin Timberlake for $35 million in 2011. It has 150 employees (January 2015), well down from its peak of 1,600 employees prior to its financial difficulties.
YouTube
YouTube has over 1,000 million users. Half of YouTube’s views are made from mobile devices. It generated $4,000 million in revenues, and serves around 4 billion video views per day. The number of monthly unique visitors to YouTube in the U.K. amounted to 19.1 million (September 2013).

Tab. 15.4: Demographic summary, showing for the world total population, active Internet users, active social media accounts, unique mobile users and active mobile social accounts: the numbers of users, the percentage penetration of the total population, and year-on-year growth. Sources: statistics from a variety of news reporting sources, including The Times Business dashboard and the Statistical Portal Statista.
The number of users is high (Tab. 15.4), and growing continuously. In total, the Statistical Portal (a subscription service) estimated the number of social network users in 2014 at 1,700 million, rising to an estimated 2,440 million in 2018 (source: Statista 2015). Many of these users are Unaffiliated Knowledge Workers. That these services have become so large is all the more striking given the short period of time the Internet/Web has been in existence: they have exploded onto the scene from nothing a decade or so ago. This shows how much of a head of steam is building up within society, changing the way people communicate. The Internet has become a powerful infrastructure on which a variety of new social media and information systems have been built, and it has also changed the nature of the information industry. One of the emergent players – Amazon – now calls itself a ‘real company in a virtual world’. Google describes its corporate mission as ‘organising the world’s information and making it accessible’. These companies are positioning themselves to take a greater role in a digital communication industry.
Comparison with Existing Journal System

As an indicator of relative scale, scientific journals have subscriptions in the high hundreds/low thousands, whereas the sites above deal in millions of users. Wiley journal subscriptions average out at 874 and Elsevier’s at 1,112 for their respective journal programmes (see Annual Reports). Even allowing for multiple readers of a journal subscription, there is still a huge gap between the reach of social media and that achieved by a subscription-based journal publishing system. The relationship between the main social media sites and the Internet can be seen in Fig. 15.3.
Fig. 15.3: Social media sites by numbers of users – Internet (2,300 million); FaceBook (1,000 million); iTunes (500 million); Instagram (330 million); Twitter (288 million); MySpace (200 million); LinkedIn (150 million); eBay (130 million); Google (62 million); Flickr (51 million); YouTube (48 million); Wikipedia (16 million); Elsevier (0.001 million); Wiley (0.001 million).
According to Outsell (Outsell, 2009), annual revenues generated from English-language STM journal publishing were estimated at $8 billion for 2008, up by 6–7% compared to 2007, within a broader STM publishing market (including non-journal STM products) worth some $16 billion. In 2011 Google alone had revenues of $37.9 billion – well in excess of the size of the entire STM information industry. In 2014
FaceBook had estimated revenues of $12.5 billion. This shows how a user-focused information service can gain traction very quickly, which could lead to a massive restructuring of an industry sector.
Internet’s Reach

Much of the new social media is based on, and dependent upon, the infrastructure provided by the Internet. Data provided by InternetWorldStatistics.com on July 29th 2012 reveals the global penetration of the Internet (see Tab. 15.5).

Tab. 15.5: World Internet usage and population statistics, giving for each world region (North America, Oceania and Australia, Europe, Latin America, the Middle East, Asia and Africa) the estimated population, the number of Internet users, Internet penetration as a percentage of population, and growth in usage over the measurement period. Sources: population numbers come from the US Census Bureau; Internet usage data comes from the International Telecommunications Union.
Fig. 15.4: Global distribution of Internet users by world region, 2014 Q2 – Asia 45.7%; Europe 19.2%; Latin America/Caribbean 10.5%; North America 10.2%; Africa 9.8%; Middle East 3.7%; Oceania/Australia 0.9%.
The regional penetration of Internet use throughout the world can be seen in Fig. 15.4 above. Use by country is given in Tab. 15.6. Source: International Telecommunications Union – Facts and Statistics, http://www.itu.int/en/ITU-D/statistics/Pages/default.aspx.

Tab. 15.6: Top Internet-using countries, ranked by number of Internet users (in millions): China, USA, India, Japan, Brazil, Germany, Russia, Indonesia, United Kingdom and France.
The high number of Internet users in China is highlighted in the above table, and leads to speculation about the role which China will play in the future development of scientific communication worldwide. More generally, the BRIC countries (Brazil, Russia, India, China), the VISTA countries (Vietnam, Indonesia, South Africa, Turkey and Argentina) and other developing nations have a massive thirst for the Internet and the online services built on it. Their role in dictating the future direction of sci/tech research and information dissemination could also be critical.
Changing Face of Scientific Communication

Built around the Internet and the Web, social media and other online information services inform specific communities about what is available in the open, public domain. However, the revolution which has swept through the entertainment and general-interest areas – areas in which UKWs are also often active – does not yet appear to have made its full impact on the specialised scientific publishing sector. In practice, the ‘long tail’ represented by UKWs does not sit well with the subscription/licensing process preferred by STM journal publishers. The alternative of an open and free information system is central to the Internet’s rapid adoption. This is where some of the impetus to change may come from – the
intersection of new demographics and new search and retrieval systems with new interactive media and delivery procedures. The beneficiaries could, ultimately, be Unaffiliated Knowledge Workers. One of the issues is the rise of author-to-author direct communications, the ‘by-pass strategy’. Web 2.0 has created a twittering and blogging society, and these developments could rub off on sections of the research community and undermine the need for high-priced formalised journals and articles. At present, however, for the core academic and R&D market the ‘subscription’ and ‘article’ economy, based on licensing, still reigns supreme. Nevertheless, there are clearly social media developments in the pipeline which need to be monitored. Because of the rapid global expansion of information-based transactions and interactions conducted via the Internet, there is an ever-increasing demand for a workforce capable of performing information-related activities. The Internet and the web browser have changed everything.
The Internet Culture

As indicated earlier, there is a cultural divide between those generations which grew up before the onset of digital technology and those who know no other world than one in which interactive digital technology is a central feature of society. Each generation has its own unique requirements for information, in many respects defined by major events which took place in its formative years, but also by the availability of new communications media technology. This is particularly true of Generation Y, for whom broadband, iPods, mobile phones, laptops and iPads became ubiquitous and essential features of life, determining much of the way people communicate. The key functions which arise in this new digital mindset are ‘connections’ between information artefacts, ‘links’ between items, ‘transparency’ and ‘openness’. These are powerful features of the Internet culture. Then there are virtues such as ‘publicness’, ‘generosity’ and ‘listening’ which build up trust in the communication system, and other ingredients such as efficiency and technical competency. In business terms the emphasis is on ‘market niches’, ‘platforms’ and ‘networks’ rather than brands. Brands, which became the Holy Grail for scientific journals, are no longer valuable assets; what has become important instead is establishing a relationship between producer and consumer. This relationship has been identified as ‘prosumption’ – an active collaboration between creators and users – rather than a supply-driven approach which foists something on a static, unresponsive
market. Speed and abundance of information also help to distinguish the Old (print based) from the New (digital based) information environments.
Sharing of Results

Underpinning Internet cultural change is the premise that researchers are prepared to share their information and knowledge openly with others. In the past, it has been assumed that most scientists – as authors – would jealously guard their original data sets. “Their data is the raw record of experimental observations and may lead to important new discoveries” (Nielsen, 2011). It is their special edge over the peer group with whom they compete for funding, promotion and international recognition. There is uncertainty about whether sharing data will earn the same level of kudos as the publication of an individually created and cited research article. The history of scientific development is replete with examples of scientists stealing data from others and plagiarising other works; it is the dark side of publishing. With the existence of the Internet and more data being produced, is it still the case that there is little sharing of data about research projects? Why should scientists be willing to share their data through social media and open access? What are the motives, and how strong are the driving forces to share? Why, for example, is the community of researchers behind the Sloan Digital Sky Survey (SDSS) so willing to share data produced by powerful global telescopes with all and sundry, rather than keeping the results to a favoured few? Why does the Ocean Observatories Initiative share its data about the floor of the Pacific Ocean so freely, rather than keeping it among the funders? And why is the Allen Brain Atlas, from the Allen Institute of Brain Sciences, so willing to share its data with a wide global audience? There is a precedent with the Human Genome Project and the haplotype map.¹ The precedent is that sharing data produces better results than keeping data within a controlled and limited group.
1 The haplotype map (HapMap) of the human genome aims to describe the common patterns of human genetic variation.

This is particularly the case in those areas where there is a massive investment, involving millions of pounds, in a particular research project, and notably when the project is multidisciplinary in scope. However, as more data is shared online, the traditional relationship between making observations and analysing data is changing. Historically the two activities have been carried out by the same people, but nowadays it is becoming popular for analysis of the
data to be carried out by specialists outside the data-creating laboratories. New professions have emerged to cater for data analysis – bioinformatics, cheminformatics and astroinformatics, for example. These are disciplines where the main emphasis is not on doing new experiments but on finding new meaning within existing data; the profession of data and text mining is coming to the fore. There are, however, significant barriers to sharing. The research community is still faced with the situation that sharing information and data is not as ingrained in the individual scientist’s psyche as some of the large collaborative projects would suggest. Most career-minded scientists have little incentive to contribute to open-sharing sites, and instead focus their efforts on doing what has been traditional for decades – ‘publish or perish’ – creating articles which are published and cited and bring international recognition for the research efforts of the scientist concerned. The fear is that openly shared sites create opportunities for even more plagiarism, for stealing results before they have been attributed to the original author, and for allowing false and bad results to be disseminated. Another barrier to sharing is presented by research which could lead to patents or a commercial product. This is where basic research, traditionally undertaken within universities, comes up against the more proprietary aspects of applied research, traditionally undertaken in industry but increasingly also within universities under contract. If sharing is to succeed across the broad range of disciplines which have an industrial application, it will require a change in behaviour and administrative procedures within corporations and research institutes.
‘Networked Science’

Nevertheless, arising from greater collaboration and global cooperation is ‘networked science’, which brings together in one space all those interested in a particular topic, using the power of telecommunication technologies. However, networked science is currently constrained by a closed scientific culture which, as indicated above, chiefly values contributions in the form of scientific papers rather than a complete and open dialogue and disclosure of research results before publication. Some of the change in attitudes can be brought about through compulsion. This is being done by grant-awarding agencies insisting that – in order for a researcher to get additional funding in future – they deposit their published results in a Green Open Access repository or a Gold Open
Access journal. Some agencies, such as the U.S. National Institutes of Health, go further by stipulating in which open access medium the work must be made available (e.g. in PubMed Central). For networked science to reach its full potential, scientists must make an enthusiastic, wholehearted commitment to new ways of sharing knowledge. The funding agencies can play a significant part in making this happen, although some are less proactive than others. Such a top-down, centralised means of enforcing change may not go down well with researchers. Effecting change may also require a bottom-up approach, to convince researchers, readers, librarians and other intermediaries that an open policy is best for all. This is a major cultural shift, as much educational as promotional, and it requires a different approach – it needs some of the agents listed as driving forces effecting a ‘tipping point’ (Gladwell, 2000). According to Nielsen (2011), a revolution is occurring in the way knowledge is being created – we are reinventing discovery. He suggests that there are two periods for science – pre-networked science, and open or networked science. The latter involves a commitment to sharing not evident in the earlier phase, and leads to cooperative discovery. There are many examples of a groundswell of activity to create an open information service. Underpinning these practical examples is the notion that sharing is good for the community: openness means greater involvement by a greater number of people from all walks of life, including Unaffiliated Knowledge Workers.
Open Science

The full potential of concepts such as the data web, citizen science and collaborative markets currently remains unrealised. As Nielsen (2011) puts it, “To reach its full potential, networked science must be open science”. This requires openness not only of the published text but of all that went before it, including raw data, lab notes, blog collaborations, commentaries, presentations – everything which supported the project’s progress. Open science also requires some aspects of the new communication possibilities on the Internet to be integrated into the science research cycle. These include:
– A modular approach, in which the research project is broken down into separate and discrete parts
– Free use of other researchers’ work – enabling science to be ‘built on the shoulders of giants’, and not restricted by a dated business model for access
– Micro-contributions to be encouraged, so that individuals with even the smallest amount of research input can become part of a vibrant larger community
– Shared praxis to be developed, whereby the culture, standards and protocols for advancing research in particular areas are well understood
– A common knowledge-base to be established.
Openness as reflected above is a prerequisite for unaffiliated knowledge workers to benefit from the research effort undertaken within the community. Besides mandates imposed by funding agencies, how else can ‘openness’ be incentivised in science communication? One suggestion is to adopt for networked science a filtering system similar to that used in formal journal publishing – refereeing, citations, measurement and archiving – and to apply it in related areas such as data and software. This does not exist at present for networked science publishing. There are aspects of human nature which glory in being part of a larger whole, as witnessed by the support given to Wikipedia not only by scientists but also by amateurs. In the early days of Wikipedia, writing items for inclusion was seen as a frivolous activity; this has changed over the years, and contributing has become legitimised in the eyes of scientists. As Nielsen (2011) comments, given that the Internet, the web and various online tools such as email were so enthusiastically adopted by the scientific community, why did it take so long to adopt Wikipedia? The reason is that Wikipedia, GenBank and similar services are essentially conservative systems, geared to sustain projects in the service of the conventional goal of writing scientific papers. They are willing to use unconventional means – blogs, wikis, peer-to-peer – to achieve what is basically a conventional end (publication of a research article). However, such public sharing of information is not universal. There is, for example, no worldwide agreement to share data about influenza viruses, and non-human genetic data is also not being shared widely. Researchers in these areas fear that sharing research data will give competitors an advantage – in some cases leading to the stealing of the research data, or to competitors rushing into print with plagiarised versions of the author’s work. This concern is an obstacle to sharing.
Science and the Media

There is a further challenge in getting access to credible research results. This is when science news is reported in scientific magazines or newspapers in a journalistic rather than scholarly style. It is often questionable whether such
scientific reporting is accurate or whether it is biased and comes with the writer’s (or publisher’s) agenda. As an example of the challenges in this area, Sir Paul Nurse, Director of the UK Centre for Medical Research and Innovation (UKCMRI) and President of the Royal Society, presented a programme for BBC Horizon in January 2011 in which he argued that scientists had not done well enough in getting the message across about the results of their research. Instead, results are often cherry-picked by righteous zealots in order to hijack the real message of the research, purely for public consumption (or to sell newspaper copy). Though a healthy scepticism of research results is laudable – for example, over issues such as global warming or genetically modified crops – the alternative position of obstructive denial is not good for innovation or social progress. It has become a case of ‘a point of view being adopted rather than peer review’, and those who do not follow peer-reviewed publications are left to make interpretations of interpretations. There is, according to Nurse, a need for the conspiracy theorists and the peer-review system to be brought closer together, and for the public not to be fed conflicting and hysterical messages through the public media. Scientists are not great at managing the media message; nor are existing publishers good at delivering the message to the public at large (see the Nautilus concept described in chapter 17). Blogs and wikis still operate in a world which rarely interacts with formal scholarship. Political manipulation of scientific results to fit the political leanings of a newspaper or magazine is not a good basis for the promotion of science. Free and open access to data means that scientists are not the only interpreters of the data: anyone can have their say, however poorly considered and whatever the bias of the author. Here the ‘wisdom of the crowd’ takes over (see Surowiecki, 2004).
This is challenged by Keen (2007), who lamented the growth of the ‘culture of amateurs’ and the noise which such a fully democratic scientific information system would produce. Access to, and interpretation of, research results is a barrier to effective scientific communication with society in general, and needs to be addressed through an efficient and accepted method of review which maintains quality without sacrificing timeliness or impact. In the February 5th 2014 issue of The Guardian, Andrew Brown wrote a critique of the STM publishing system (Brown, 2009). He tackled the issue of the closed system of publishing for scientific research, with high-priced journals dominating the scene. He felt this was bad, and also “extremely bad for anyone outside a university who may want to learn, and that’s a situation the web has made more tantalising”. Almost all refereed research journals are indexed, and references to them can be found on Google Scholar, PubMed Central and Wikipedia. So the facts and truth are out there. But it will cost the unaffiliated individual a lot of money to
gain full access to the published results. One answer to all this, claims Brown, is to promote the growth of free scientific publishing, and also greater sharing of, and free access to, the immense quantities of data that lie behind most published papers.
Science and the General Public

The term ‘knowledge worker’ was first coined by business guru Peter Drucker in his book ‘Landmarks of Tomorrow’ (Drucker, 1959). His definition of a knowledge worker was a person who works primarily with information, or one who develops and uses knowledge in the workplace. A leading source for the development and use of scientific information is the higher education system. Today’s public is increasingly educated and engaged. As suggested earlier, by participating in collective knowledge projects some parts of the general public have embraced and become part of the scientific process. The World Community Grid signs up volunteer computers.² Other projects such as Wikipedia and SETI³ turn to volunteers for input via their computers. Through Folding@home, 40,000 PlayStation 3 volunteers help Stanford scientists fold proteins. In ReCAPTCHA, amateurs help digitise The New York Times’ back catalogue. In the ESP project, the public has labelled 50 million photographs which train computers to think. In Africa@home, volunteers study satellite images of Africa, to help the University of Geneva create useful modern maps. Conservation biology, a vast academic field, depends on amateur surveys, both outdoors and in historical collections. At Herbaria@home, for example, volunteers decipher herbaria held in British museums. The Bentham crowd-sourcing project uses volunteers to prepare high-quality digital images of University College London’s collection of unpublished manuscripts written by the philosopher and reformer Jeremy Bentham, which runs to some 60,000 manuscript folios. Much of this crowd-sourcing, or mass voluntary participation, is just ‘grunt work’: basic lab-assistant-type activity that often involves image recognition. Scholars engage less with the ‘hive mind’ – the public – when it comes to more complex or interpretative work. There are exceptions. For example, in Israel, the
2 World Community Grid (http://www.worldcommunitygrid.org/) enables anyone with a computer, smartphone or tablet to donate their unused computing power to advance cutting-edge scientific research on topics related to health, poverty and sustainability. 3 SETI is the collective name for a number of activities undertaken to search for intelligent extraterrestrial life; SETI projects use scientific methods in this search.
Rothschild family and others are pioneering a project to put the Dead Sea Scroll fragments onto a public-domain website, thereby engaging with religious communities that have unparalleled language skills. However, the scientific community has hitherto not made its ‘core’ research material available to the public. The public only has to try accessing research databases via Google, instead of through a university account, to experience what hitting an information brick wall is like. Many STM publishers make clear that they are commercially owned and thus debar access to all those who have not paid for a right of access. Few academic databases and research tools are in the public domain, even though the public has indirectly paid for their content – through research grants, tax breaks, and donations. Nor is the higher-order academic commentary available to the public. It is equally problematic that JSTOR, the database of most twentieth-century scientific articles in the social sciences and humanities, is off-limits to the public because of publisher-protected copyright. The suicide of the computer wizard Aaron Swartz in January 2013, after being prosecuted for downloading articles from JSTOR without authorised access, is a particularly tragic consequence of such a barrier facing individuals. This is where the battle lines are being drawn. How will STM publishers in future meet the challenge of providing an increasingly educated public with access to the publications which their taxes have been responsible for creating? If the system changes to accommodate a wider dissemination of what is currently ‘closed’ information, then the current attractive business model for publishers needs to be revisited, and there is no guarantee that future margins and profits will be at the same level as now. 
The only entities who can afford to purchase the expensive output of the publication process are research libraries attached to higher education institutes, universities and corporations. The unique feature of this arrangement is that, although these research libraries buy the books, journals and related services, they do not themselves use them. There is a split between those who want the publications – but cannot afford them – and those who buy them (the research libraries) but do not use them. As Dr Peter Murray-Rust put it eloquently in a listserv message (LibLicense, 30 April 2012): The idea that there is a set of “researchers” in Universities who deserve special consideration and for whom public funds must be spent is offensive. I fall directly into SH (Stevan Harnad’s)’s category of “the general public”, whom he now identifies as of peripheral importance and thankful for the crumbs that fall from his approach. I have worked in industry, work with industry and although I have been an academic am not now paid as one. The idea that I am de facto second-class is unacceptable, even if you accept the convoluted logic that this is necessary to achieve Green Open Access.
There are no areas of science and more generally scholarship which are not in principle highly valuable to “the general public”. I am, for example, at present working in phylogenetics – not a discipline I have been trained in – and I and my software wishes to read 10,000 papers per year. Most of these papers could be of great interest to some people – they detail the speciation of organisms and are fully understandable by, say, those whose hobby is natural history or those with responsibility for decision making. (Murray-Rust, 2012)
Peter Murray-Rust is a supporter of publicly funded Gold OA and of Green OA repositories. He is not prepared for these to be dismissed ex cathedra. Both work well in the areas he is acquainted with – he is on the board of PubMed Central and also on the board of a BOAI-compliant Open Access journal (where, he claims, half the papers come from outside academia and are every bit as competent and valuable as those written by academics). There is an increasing amount of scholarship taking place outside universities and not reliant on the public purse. Wikipedia is an example. It is notable that uptake of publication-related tools such as WP, Figshare, Dryad, Mendeley, etc. is high, because people actually want them. Dr Murray-Rust would like to see effort put into the information-saving and sharing tools that people need – all people, not just academics. The Internet and the web give them the opportunity to use tools and services to become much more involved than in the past. There are over 200,000 participants in Galaxy Zoo⁴ and over 75,000 in Foldit, a part-game, part-scientific online service.⁵ No expensive equipment is needed – just a home computer or smartphone. Barriers to entry to communicating about science have dropped. These are people who belong to the Unaffiliated Knowledge Worker (UKW) community. The Internet has given them the tools to find scientific information which had hitherto been denied them. Their eyes have been opened, and they are champions of a new open and collaborative network, not just within the U.K. but also globally.
Summary The evidence is that there are a number of trends which are resulting in a revolution in the way scientific information is being handled. Thus far the impact of the social revolution on UKWs as creators and users of scientific/technical information has not been dramatic, despite the power of some of the social concepts and social media described in these last chapters.
4 Galaxy Zoo (http://www.galaxyzoo.org) uses crowd-sourcing to classify images of galaxies. 5 Foldit (http://fold.it/portal/) is an online puzzle video game about protein folding.
In part this may be because technological progress has not yet created platforms and processes sufficiently attractive to UKWs; or because there has been insufficient motive for researchers to change their behaviour – the tipping points have not been reached; or because there is a mismatch between social and technical developments. However, the sheer numbers of registered users of social media services suggest that a social revolution is waiting to impact on scientific information. As digital natives converge with new information platforms facilitated by social networking and media, UKWs would appear to be potential beneficiaries. However, change is not being promoted by social trends alone. Technology is also a powerful agent of change. The available technical developments which could underpin the dissemination of STM information will be explored in the next section.
Chapter 16 Forms of Article Delivery Introduction The following chapter looks at some of the services currently used to disseminate research results. These date from a time when the print paradigm reigned supreme and research publications were targeted at an elite audience within academia. Indications of a break away from such a traditional, conservative approach, to enable UKWs to become active participants, will be highlighted later.
Role of the Journal Over centuries the scientific journal has become the main channel for disseminating quality-controlled results within the research community. Journals performed four key roles which led to their dominance as the media channel to researchers:
– they registered the results of a research project;
– they certified that the results were correct (through refereeing);
– they provided archival access to published results; and
– they disseminated the results to a global research audience.
In addition, and more recently, journal publishers:
– formalised the production of research reports;
– improved presentation by offering copy-editing skills;
– made reports available online, ensuring they were visible to all through Internet channels; and
– forged links to related work.
In a print world, and in the early days of digitisation, the original functions were what the research community wanted. But as the Internet, the web, and other digital support services grew powerful, the relevance of some of the core functions came under scrutiny. Fast communication of the latest research data is a feature of a rapidly moving scientific environment. The traditional printed journal format does not support this easily. By focusing on creating a quality product (the article’s Version of Record) through the certification process (refereeing), in-house desk-editorial
corrections, and physical printing, a time-lag of several months – in some cases years – was built into the printed publication of an article in a journal. This is not what modern science needs, nor indeed what researchers who have been interviewed want, so alternative mechanisms for communicating the latest research output have emerged. New communication formats – using social media and social networking, workflow procedures, collaboratories, data-driven approaches, etc. – have proven to be more effective in the digital environment within which authors/researchers are now operating. The refereeing system in particular has come under scrutiny. In the print world, blind refereeing of a manuscript by at least two knowledgeable experts was the basis for ensuring quality was maintained. Despite a number of well-documented individual failings and frauds, the refereeing system worked well. But it is slow and ponderous. Whilst still relying on an assessment by one’s peers, online refereeing, including pre- and post-publication assessment, has been tested. The principle behind opening up a manuscript to wider scrutiny lies in preferring the ‘wisdom of the crowd’ over the original judgement of a (possibly partial or overworked) expert (Ware, 2005). The journal still has a role to play, but its prime role is that of conferring recognition, reward and authority on the author of the article. On the basis of having an article published in a reputable journal, an author can use his/her publication record to gain peer recognition, career enhancement, and additional funding. All these are essential functions, but not critical for those end users and knowledge workers who are seeking to keep up-to-date with the latest research developments, particularly those at the edges of the academic system. 
To quote a comment by Jan Velterop (an ex-traditional publisher) on the Global Open Access List on 21 June 2012: We use journals not for conveying information, but for protecting scientific reputations and for fostering career prospects.... Hanging on to the old (subscriptions) in order to achieve the new (open access) may have been considered a suitable strategy ten years ago, but what it delivers is at best a form of open access that’s likely to be merely ‘ocular access’ and of limited use to modern science, in contrast to the benefits that come with a radical change to full open access (no rights limitations, commercial or technical), not just to the equivalent of text on paper, but to all the potential that can be released from text, tables, graphs and images in electronic format.
Another view, from a practicing biologist (Ross Mounce), appeared on the GOAL listserv on 9 November 2012: I don’t need ‘journals’. I just need effective filters to find the content I want amongst the ~2million papers that are published this year, and the ~48 million from all years previously (basing figures on http://dx.doi.org/10.1087/20100308).
The role of the journal is made more complex as a result of the many spin-off services which have emerged. Fig. 16.1 shows the relationship between the various formats as part of the scientific method for disseminating research results.
Fig. 16.1: The Journal and the Scientific Method. Source: Michael Mabe, Chief Executive, STM Association (personal communication). [The figure is a flow diagram tracing research output from the private sphere (creating research, first draft), through discussion and revision with co-workers and the invisible college (draft manuscript, draft for comment, seminar/workshop/conference, pre-print), to public evaluation within the speciality and discipline (criticism, confirmation), and finally to public acceptance and integration (the peer-reviewed journal paper, review papers, science journalism, reference works, prizes, research monographs, histories and textbooks).]
A further issue which challenges the role of the scientific journal is that of ‘negative results.’ It is as useful to publish the results of research which fail to prove a point as it is to publish results which are supportive. Negative results indicate pathways which researchers need not go down as they end in a brick wall and disappointment. Time and resources could be saved by not following others down this road. Publishing ‘negative results’ could be considered something of a failure, and as such researchers may be unwilling to lend their name to the results. There is little kudos to gain from publishing such outcomes. Negative results now account for only 14% of published papers, down from 30% in 1990, according to a report in The Economist of 19 October 2013 (Economist, 2013). Similarly, complaints have been made in the pharmaceutical sector that published research cannot be replicated. Again, The Economist in the same
article claims that biotechnology venture capitalists found that half the papers they analysed were unrepeatable. Amgen found that it could reproduce only 6 of 53 ‘landmark’ studies in cancer, and Bayer found that it could repeat only 17 of 67 similarly important papers. Whether there is a rapid migration from the traditional journal to new formal and informal ways of communicating research – i.e., in the usage of information – will depend on how willing the research communities (each discipline having its own approach) are to change their information gathering and disseminating habits.
Future of the Subscription-Based Serial Model It has been suggested by eminent pundits (see chapter 8) that the current situation in the scientific publishing garden is not particularly rosy. There are complaints about the way journal publishers have been ‘exploiting’ the research community in their quest for profitability and market share. The earlier claims are that the journal is an outdated form of communication in an Internet world. There are also issues about the fragmentation of science, and no one model could hope to fit all end users’ needs. A serial is, in library terms, a ‘publication that is intended to be continued indefinitely’. The complaint is that a subscription to such a publication is not a valid business model on which the scientific communication process should rely in future. For example, when a library subscribes to a journal, it is saying to the publisher: “I’ll pay you up front to send me all the articles published in Journal X for a year, regardless of how many of the articles turn out to be of any actual use or interest to my patrons.” In the print environment the library community had no choice but to buy articles that way, but in the online world that level of built-in waste is no longer necessary, and libraries’ shrinking budgets are making it much harder to justify. It makes much more sense to pay only for those items that are actually wanted and get used. There are several problems with this approach, one of which is that we are functioning in a scientific economy that has been shaped by what was appropriate in the printed world. Publishers cannot make as much money selling only those articles that people may want as they can selling articles in pre-packed bundles. There is a big gap between usage of an article (however defined) and the total number of sales achieved through its inclusion in a journal subscription package. There have been anecdotal suggestions about how many people read a published article in a traditional scientific journal. 
One expert (King Research) suggested that it was six readings per article, though this was subsequently questioned
as it did not take into account the browsing value which articles have. Others have been more cynical and suggested that an article is read only 1.5 times – once by the author and half by the author’s mother, who failed to read it in its entirety! These unsystematic observations aside, should greater efficiency in article delivery be implemented, those publishers that have benefitted most from the subscription model could suffer the most. According to some library pundits, such as Rick Anderson, Dean of Libraries, University of Utah (see listserv on 26 October 2011 and Anderson, 2013b), the long-term solution will not involve libraries paying for articles their patrons do not want, because the money to do so is not there anymore. Are there any indications that the scientific publishing sector is migrating away from the business models favoured in the printed world towards something more appropriate to current digital conditions?
Alternatives to the Journal New discovery tools, especially gateway services such as Google Scholar, PubMed, Scirus and the Web of Science, have made research literature more visible to more people, more conveniently, than ever before; but discovery and access are not the same thing. In the free-text comments section of the Rowlands (2011) survey, researchers vented frustration over the limited range of journal titles available to them at their institution. A key issue here is the tension between the ‘article economy’ (what readers want) and the ‘journal economy’ (the dominant business model for information supply in the form of subscriptions or site licences). The provision of simple (preferably free or inexpensive) mechanisms to deliver information at the article rather than the journal level would extend the reach of much research (Brown, 2003). Blogs, wikis, moderated listservs, blogtalkradio, online seminars and webinars have emerged as examples of the new grass roots of information creation, revelling in the democracy which the Internet has brought about, available without charge and deliverable in a manageable format. These new alternatives are described more fully in the next chapter. What does this trend to alternative, non-journal publishing mean for science? It suggests a new form of communication which could/should become the cornerstone of scientific research output – a communication system referred to as social publishing, based around Web 2.0 or its derivatives. It is seen in its practical guise when the power of the masses produces something which
individual experts would not achieve easily on their own. Should an alternative refereeing process ever take root in scientific communication, then one of the key planks supporting the current practice of STM publishers disappears. So far it has not happened, but it is a structural issue facing the role of publishers over the next five or so years.
Document Delivery The ubiquity of easily transmitted digital content has taken document delivery from a high-touch, labour-intensive service to a quick-turnaround activity. Document delivery (docdel) has been a feature of the science communication landscape since the early 1960s, when the National Lending Library for Science and Technology was set up as a dedicated document delivery centre at Boston Spa in Yorkshire, England, under the leadership of Donald Urquhart. It was subsequently brought within the British Library in 1973. Since then its operations have been increasingly dictated by the commercial interests of publishers as much as by the needs of end users for an individual article. Currently the British Library is one of a number of key players in the document delivery business. There are a dozen or so agencies operating docdel services internationally, half of which are private initiatives and the other half affiliated with national scientific organisations, such as in France, South Korea, China and formerly in Canada. Document delivery follows two main procedures. Primarily, document delivery is a crucial service within information centres. The information centre receives requests for documents – in the form of citations, bibliographies, and even hastily scrawled notes – and the information unit fulfils those requests with articles, papers, patents, etc., retrieved from content sources licensed by, owned by and linked to the centre. When a search on these content sources retrieves no results, the information professional turns to an external document delivery vendor. The document delivery vendor uses its dedicated resources to find even the most obscure document, and then transmits it to the information professional. The information professional in turn forwards the item to the waiting end user. The information centre pays the document delivery vendor a standardised per-document fee and in some cases passes these charges on to the researcher. 
Document delivery remains in a time warp, never successfully challenging the journal subscription, nor effectively serving the interests of UKWs. These services are on the periphery of traditional STM publishing. They exist but are constrained by their contractual relationships with both suppliers (STM
publishers) and customers (research libraries) from taking the lead in determining new directions for scientific communications.
Walk-in Access The number of higher education institutions (HEIs) offering non-library members access to their electronic resources has grown considerably in recent years. These ‘walk-in access’ services permit non-registered users to access content, but only when the terms of the institution’s licence agreements allow. For printed publications a more lenient system operates. Research libraries allow visitors to access their collections on a reference basis, whether they are staff or students from other universities or even members of the public. Many libraries, in support of the goal of widening access to research, operate local lending schemes through which members of the public may borrow items from academic libraries. Some librarians have also implemented reciprocal borrowing arrangements between university libraries. The largest such reciprocal borrowing scheme, SCONUL Access, covers most academic libraries in the UK and Ireland. Despite efforts by librarians to open up walk-in access to online or digital publications, as distinct from printed ones, this material is often subject to stricter licensing conditions, limiting access to members of the institution alone. It is possible that Open Access publishing may reduce this problem, but at present there is no critical mass of digital material which can be offered to walk-in users.
UK Public Libraries A proposal was made by the Finch Group (RIN, 2012), which included publishers, libraries, universities and learned societies. This was an “Access to Research” initiative which would give UK public libraries online access to 8,400 journals, or 1.5 million articles, published by several of the major academic publishers: Elsevier, Emerald, IOP Publishing, Nature Publishing Group, Oxford University Press, Springer, Taylor & Francis and Wiley. Conference proceedings are also included. All content provided is digital and can be accessed from designated public library terminals. Access to Research has been launched under the leadership of the Publishers Licensing Society. Over half of UK local authorities have signed up their
public libraries to the scheme, which will initially run as a two-year pilot while interest is being monitored. Some open access advocates have criticised the scheme as a poor alternative to direct open access from users’ own computers (rather than through the library terminals) – particularly in an era of library closures and falling library usage. The Finch report mentioned that such a scheme: …would not, of course, meet the demand for access at any time and anywhere. But access free of charge to any user of a public library would provide real benefits to many people who at present face considerable barriers if they want to find authoritative information about research relevant to their interests and needs.
This includes unaffiliated knowledge workers. The Finch group expected the initiative to have a major impact – particularly if it was accompanied by a “clear marketing strategy” to alert people to its existence. Public libraries can also provide free, unrestricted access to journal articles (and other scholarly resources) to users at home (the Access to Research initiative is limited to ‘walk-in’ use within the library) through services such as CORE¹ and BASE.² These services provide access to millions of articles. However, they are not widely known to the general public.
NHS Libraries Jisc and a group of scientific journal publishers came to an agreement in May 2014 to allow staff at National Health Service (NHS) centres to benefit from a national licensing deal. A year-long pilot will allow NHS staff free access to key medical and scientific journals. The aim is to support evidence-based healthcare and give healthcare professionals the opportunity to weigh up the latest developments. A steering group comprising representatives from the UK academic sector, Jisc, NHS Education for Scotland, the NHS in England, Wales and Northern Ireland, and the National Institute for Health and Care Excellence (NICE) is overseeing this pilot. The publishers who have agreed to take part include AAAS, Annual Reviews, Elsevier, IOP Publishing, Nature Publishing Group, Oxford University Press, Royal Society of Chemistry, and Springer S&BM. After the trial period of a year the steering group will determine how to take forward future opportunities for NHS staff to access research journals. 1 Available at: http://core-project.kmi.open.ac.uk/ 2 Available at: http://www.base-search.net/about/en/index.php
Alumni A few universities are working with their alumni departments to promote walk-in access, and many institutions cite alumni demand as a reason for setting up walk-in access. The publisher SAGE is (from 2013) making its content available to alumni of institutions which subscribe to SAGE’s online services. In a change to its licences, university alumni registered with subscribing or purchasing libraries are able to access SAGE products, which include over 645 scientific and professional journals, at no extra cost. Libraries had the option to opt in to this new licence on renewing subscriptions in 2013. Elsevier has also announced the launch of the Postdoc Free Access Program, an initiative to help early-career researchers who are between research positions stay up-to-date in their field. It offers postdocs a Free Access Passport, with complimentary access to journals and books on ScienceDirect for up to six months. In order to qualify for the Free Access Passport, candidates are asked to fill out a form³ verifying their credentials. Once approved, they receive a personal code allowing access to ScienceDirect. A common concern amongst librarians in institutions which have not set up a walk-in access scheme is that demand for the service will be large and that this will jeopardise the service offered to registered, fee-paying students and researchers. Research conducted by SCONUL shows that, while demand for walk-in access has been low, no walk-in access service has been actively promoted. The institutions which have launched a walk-in access service tend to refer to the service as a ‘pilot’ or a ‘soft launch’, intended to gauge demand and to test technical solutions and operational procedures. The only tangible promotion of walk-in access services employed has been to list them on library web pages. 
Once effective promotional strategies are implemented, walk-in access services represent academic librarians’ efforts to make publicly funded research available to members of the public. Whilst we wait for a critical mass of the journal literature to be made open access, or for a national licensing initiative to tackle the problem of barriers to research, academic libraries have provided a short-term solution to an immediate problem.
3 Available at: http://www.elsevier.com/postdocfreeaccess
Summary There are alternative ways of paying for research results – whilst the journal subscription remains supreme, the way publications are packaged for particular markets such as the NHS, public libraries and/or alumni has become a growing area for analysis. Walk-in access to a physical (print) collection is the traditional option. This is the one publishers prefer to support: they would remain in control. To some extent, these ‘re-packaging’ options are make-do developments, patching over an awkward system of journal publishing. However, in so doing they may also offer scope for extending the reach of scientific output beyond the academic sector. Nevertheless, there are other, more significant ways research results can be transmitted. The next chapter identifies some of these alternatives.
Chapter 17 Future Communication Trends Alternatives to the Journal Article As reported by Velterop in a listserv on 9 November 2012, in an exposé of the role of the journal: Very few journals are indeed ‘journals’ (in the sense of presenting ‘daily’ updates on the state of knowledge), except perhaps the likes of PLOS One and arXiv. So what we traditionally think of as journals have had their heyday. They functioned as an organising mechanism in the time that that was useful and necessary. That function has been taken over, and become far more sophisticated, by computer and web technology. That doesn’t mean journals, as an organising concept, will disappear anytime soon. I give them a few decades at least. I see articles also change in the way they are being used and perceived. They will more and more be ‘the record’ and less and less a means of communication. One reason is the ‘overwhelm’ of literature (see e.g. Fraser & Dunstan, on the impossibility of being expert, BMJ 2010). ‘Reading’ in order to ‘ingest’ knowledge will be replaced by large-scale machine-assisted analysis of, and reasoning with, data and assertions found in the literature. Organisation of the literature in the current prolific number of journals – and the concomitant fragmentation it entails – will be more of a hindrance than a help. Initiatives such as nanopublications (http://nanopub.org) and, in the field of pharmacology, OpenPHACTS (http://www.openphacts.org), are the harbingers of change.
The ARL (Association of Research Libraries, 2008) commissioned Ithaka Strategic Services to look at the various new formats which were becoming available and being used by scholars to communicate research results. Their report, “Current models of digital scientific communication”, was published in November 2008. It highlighted that there was a growing number of non-traditional ways to disseminate scientific information; whilst these remained peripheral in most subject areas – with the mainstream refereed journal as the accepted mode of communication – an erosion was nevertheless taking place in the way information about scholarship was communicated. The ARL/Ithaka survey identified eight main types of digital scientific resources available at the time. These were:
– E-journals. This includes e-journals which allow immediate access to newly published articles. Some e-journals include multimedia, data visualisations and large datasets (such as JoVE: Journal of Visualized Experiments).
– Reviews. Though highly rated as a service, it takes a long time to write and edit each review. Mainly of appeal in the humanities.
– Preprints. Two key e-print resources include arXiv (physics) and the Social Science Research Network (SSRN).
– Encyclopaedias and other reference works. Includes the Encyclopedia of Life¹ which encourages contributions from the lay public, although subsequent vetting is necessary.
– Data. The Protein Data Bank² is an example of mass participation in creating a global digital resource. Support for these often comes from government funding.
– Blogs. Blogs are seen as updated versions of the traditional listservs. Unlike discussion lists, blogs are more tightly controlled in who is allowed to participate. Of value in that they give frequent updates of researchers’ opinions rather than just facts. However they only represent interim stages, not the final result.
– Discussion forums. Listservs, message boards, etc. are still used heavily in many disciplines. Not used to work through ideas, however; more as a communication medium.
– Hubs. Combine a number of formats within a single portal. Frequently found in STM areas.
The above are not exclusive. They have some aspects of sifting or peer review built into their services, as it was apparent to the authors of the study that quality control was an essential requirement of scholarship. The report was completed over seven years ago. It explains why the more ‘communicative’ forms are not included in the list – forms such as YouTube, Facebook, Figshare, ResearchGate, Twitter, LinkedIn, etc. – all of which could become part of the scientific communication system but are not involved in developing a record of scientific progress. Also excluded from the list were aggregations of links to other sites, software, digital copies of print content, industry newsletters and teaching-focused resources. Nor was Wikipedia included, as it was seen more as a consumer-focused service. Each scientific discipline has its own experiences with adopting new information systems. Many digital publications are directed at small, niche audiences (supporting the ‘twigging phenomenon’). Respondents gave three main reasons for their use of any of the digital resources. Primarily it was to gain access to the most current research (which counts against traditional journals, given their long gestation period before publication);
1 Available at: http://eol.org 2 Available at: http://www.wwpdb.org/
facilitating exchanges among scholars (or networking); and obtaining useful co-location of works. All of these emphasise the need for speed and interactivity. The point in highlighting these new digital resources is that the unaffiliated (UKWs) have the chance to take part in such ventures, more so than they had in the tightly controlled and elitist journal publication system. It gives the unaffiliated the chance to sit at the same table as their academic peers. It is an example of the Internet supporting more democracy within the scientific information system. It provides a means for participation which has not existed in the past. But in order to participate there needs to be an understanding of the content within published articles. The publication system would need to take into account not only the specialists, the elite within the discipline, but also the many other ‘long tail’ constituents who wish to be involved but are not quite as specialised in their education or training.
New Forms of Article Dissemination

There have been attempts to granularise the journal and to come up with new ways of delivering articles. For example, Elsevier’s Cell Press publishing unit announced in late 2009 its ‘Article of the Future’ project. This involved collaboration with the scientific community to redefine how a scientific article should be presented online. According to Elsevier, the project took advantage of current online capabilities, allowing readers individualised entry points and routes through content, whilst also exploiting advances in visualisation techniques. This approach to semantic enhancement is “to restructure the presentation of the article to take advantage of the fluidity of the online medium, and to integrate text, graphics, associated data and other supporting materials, multimedia annotation and content in a more dynamic yet structured format.” Features of the project included a hierarchical, non-linear presentation of text and figures – readers could elect to drill down through the layers based on their current tasks in the scientific workflow and their level of expertise and interest. It also provided bulleted article highlights and graphical abstracts – readers could gain an understanding of the paper’s main message and navigate directly to specific sub-sections of the results and figures. The graphical abstracts or thumbnails (small icons) were aimed at encouraging browsing, promoting interdisciplinary scholarship and helping readers identify which papers are most relevant to their research interests. Emilie Marcus, Editor in Chief, Cell Press, commented:
The genesis of the ‘Article of the Future’ project came from a challenge to redesign from scratch how to most effectively structure and present the content of a traditional scientific article in an online environment. The rapid pace of technological advancements means this will undoubtedly be an evolving design, but we are happy to be able to address some key reader and author pain points such as the integration of supplemental data with these initial prototypes.
Successful results from this project would ultimately be rolled out across Elsevier’s portfolio of over 2,000 journals. At a publishers’ conference held in Berlin in January 2014 (APE, or Academic Publishing in Europe), Richard Padley, CEO of Semantico in the U.K., gave a wake-up call about ‘what the article of the future is really about.’ He suggested that if we were called on to design the ecosystem for scholarly communication today, it would not look like it does now. He pointed out that print does have permanence, but such permanence is not necessarily what science needs. Among the features which should be designed into the ‘article of the future’ is the ability to change the article and not lock it into the past. It needs to be interactive and updateable. It also needs to be executable – so that it leads through links into other media. As researchers interact directly with each other’s data it becomes possible for them not just to publish science online but actually to create new science online. Padley also thinks the article of the future should be reproducible. This means providing access to the raw data – and not only to the raw data but also to the software to manipulate it: This takes us a long way away from a world where recognition of academic results was more or less dependent on text …. contained in a document. However, for many publishers that mindset is proving hard to change (Padley, 2014).
A further extension of this is data mining – if this takes hold, will anyone in future want to read papers in the old-fashioned way at all? Will there be ‘an article’ in the future? However, these isolated projects and viewpoints need to be set against the whole panoply of other processes which relate to and impact on published research results. Several of these have been embraced by wider society, and open up the potential for UKWs to become more involved in scientific research in future.
Blogs and Wikis

Blogs are a distinctive feature of communication in the Internet society. Some are very popular, and have attracted attention from a wide audience. They are
probably not going to transform the scientific world but do show what can happen when you remove communication barriers separating individuals from the rest of the community. Blogs are a powerful way to scale up informal scientific conversation and to explore speculative ideas. Although there is a fear that they provide a platform on which rumours and mistakes abound, to some extent these rumours are self-regulating. Blogs are increasingly important to academic journals. In a study undertaken by Professor Carolyn Hank from McGill University, Canada, covering 153 bloggers among scholars from different disciplines, there was optimism about both the importance of blogs for research and their power to affect the careers of authors. 80% of those surveyed believed that their blogs contributed to the overall scientific record, and 68% felt that blogs should be subjected to the same degree of quality control as the formally published literature (Hank, 2012). Problems arise with the responsibility for curating and preserving blog activity. Whilst 85% felt it was the blogger’s responsibility to archive, only 65% felt that they had the capability to do so. There is also a more fluid situation with blogs: 98% claimed they amended their blogs after publication, and 29% deleted blogs for a variety of reasons (Hank, 2012). Where wikis and other user-controlled commentary sites fail, it is essentially down to the conservatism of the audience or to a shift in the topic. The primary objective of researchers remains to produce highly rated articles/papers. Unlike the case with Amazon, reviews of others’ works will not be written openly. Wikis and contributed end-user sites are ends in themselves and do not contribute to a scientific paper being created. Some fear that challenging someone else’s contribution to a blog or wiki – openly – could come back to haunt them in future.
Their own work may be subject to the same criticisms from those who have been attacked, or worse, those criticised in the blog may at some future stage determine whether the author gets future funding.
Tablets and Smartphones

Handheld devices are now driving new demand through free or low-priced applications, or apps. Technology consultancy Gartner published a report in April 2012 which claimed that worldwide media tablet sales to end users would reach 118.9 million units in 2012, a 98% increase on the 2011 sales of 60 million units (Gartner, 2012). How many of these were acquired by the ‘long tail’ of knowledge workers? The assumption is that the appeal of this device would be among those with an innovative bent, irrespective of their institutional affiliation or personal interest.
Gartner analysts claim that sales of tablets within corporations will account for about 35% of total tablet sales in 2015. Gartner expects corporations to allow tablets to be part of their bring your own device (BYOD) programmes. More of these tablets will be owned by consumers who use them at work. Knowledge workers are perhaps closest to the social media services reached through tablets. Everyday use of the tablet for all online functions, including scientific access, by all knowledge workers irrespective of their affiliation could become an important driving force for delivery of scientific content in formats most suitable for tablets. It was also forecast that mobile Internet use would overtake desktop Internet use by 2014. Whether the device is Apple’s iPhone, RIM’s Blackberry or an Android-based phone – each of which held some 25% of the mobile phone market (Gartner, 2012) – the need to deliver research information in a form suitable for the small screen on the phone has become essential. This is not so much about the mobile phone, which has now been with us for 40 years, but rather the more recent smartphone. The mobile display revolution may have a better chance of reaching the peripheral, non-institutional, particularly professional knowledge workers than the formal carriers of books/journals, even in a PC format. This puts a huge onus on understanding the real needs of the new ‘long tail’ of STM users. The American Chemical Society (ACS) has introduced a mobile software application for users of Apple’s iPhone and iPod Touch devices. The information delivery service, ACS Mobile, provides readers with an up-to-the-minute live stream of peer-reviewed research content published across the Society’s portfolio of scientific research journals, and is augmented by ‘Latest News’ from Chemical and Engineering News (C&EN), the Society’s magazine for its more than 161,000 professional members.
According to ACS, “The majority of our web users are between the ages of 20–40, and we find that one-third of those readers now use mobile devices to access the Internet.” Other publishers are also experimenting with delivering their information on smartphones and other handheld devices. Internet use via mobile phone and tablet offers a different user experience: on the go, from virtually anywhere and at any time. According to CIBER, the mobile revolution will result in further disintermediation within scientific communication. More people have phones than computers. Research undertaken by CIBER on the EU Europeana project involved an analysis of the usage logs of this cultural, multimedia website³ which started tackling the mobile challenge in 2011 (Nicholas, 2013; 2013a; 2013b). The information behaviour
3 Available at: http://www.europeana.eu/portal/
of 150,000 mobile users was examined in 2012 and compared with that of desktop users. The main findings were that mobile users are the fastest-growing user community – growing five times faster than PC and desktop users. Of the 4.5 million users of Europeana, 70% were referrals from other sites, 97% of these from Google; 1% of referrals came from social media. Mobile telephony is generating a ‘time shift’ in user behaviour. Mobile visits are very different from those arising from desktops: they are information ‘lite’, typically shorter, less interactive, and with less content viewed per visit. Use takes on a social rather than an office rhythm, with use peaking at nights and weekends. A great number of Europeana site visits occurred on Saturday nights for mobile users; for fixed devices such as PCs, the peak was Wednesday afternoon. The stimulus behind the growth of mobile telephony for scientific information and cultural purposes, according to the Europeana results, is that people trust their mobile phones. It therefore appears that instead of information-seeking and reading taking place in the library and office, it will take place on the train, in the coffee shop, and around the kitchen table. The change of environment and context changes the nature of searching and reading, according to CIBER. “While the first transition, from the physical to digital, transformed the way we seek, read, trust, and consume information, until relatively recently the environment and conditions in which scholars conducted these activities had not really changed – it was still largely in the library or office, sometimes the home. However, with the second transition to the mobile environment, information behaviour is no longer mediated or conditioned by the office or library (and its rules and impositions), but by the street, coffee shop, home; in a nutshell by current social norms” (Nicholas, 2013; 2013a; 2013b).
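The day-of-week contrast CIBER reports can be recovered from raw usage logs by a simple aggregation. The sketch below uses invented synthetic records, not Europeana’s actual log format, to show the shape of such an analysis:

```python
from collections import Counter
from datetime import datetime

# Synthetic usage-log records: (ISO timestamp, device class).
# Field names and values are invented for illustration.
log = [
    ("2012-06-02T21:15:00", "mobile"),   # Saturday night
    ("2012-06-02T22:40:00", "mobile"),
    ("2012-06-06T15:05:00", "desktop"),  # Wednesday afternoon
    ("2012-06-06T16:30:00", "desktop"),
    ("2012-06-09T20:10:00", "mobile"),
]

# Count visits per (device, weekday) to expose the 'time shift'.
visits = Counter()
for ts, device in log:
    weekday = datetime.fromisoformat(ts).strftime("%A")
    visits[(device, weekday)] += 1

peak_mobile = max((k for k in visits if k[0] == "mobile"), key=visits.get)
print(peak_mobile)  # ('mobile', 'Saturday')
```

Run over millions of real records, the same grouping yields the Saturday-night mobile peak and Wednesday-afternoon desktop peak described above.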
Mobile phones are about consuming information content, not creating it. This fits the ‘fast food’ analogy which, along with trends in neuropsychology (see chapter 14), means that consumers are becoming less engaged with the communication process. This is in contrast to suggestions made by pundits who propose greater interaction between information creation and the greater mass of individuals in future, driven by the culture of the Internet. With mobile devices, people can meet their information needs at the very time of need, rather than cold-store the need until they reach the office, library, or home. Logically this should mean that more needs are met; after all, if you have to store them there is always the likelihood that they will be forgotten or overwritten by another need. So might we expect more visits and searches from mobile users overall? Also, mobiles are an intrinsic part of the digital consumer purchasing process – they are used to search for information prior to purchase, during the process itself, and to make the purchase. It is
possible that scholars, who are also digital consumers, might be more likely to go down the pay-per-view route. As such, this opens up a new opportunity for unaffiliated knowledge workers. There is a mash-up to be achieved between the social/informal and the formal in terms of information formatting and delivery. There is also a charging mechanism to be considered, one more closely linked to apps than to document delivery charges. In practice there is already a procedure for paying for services through mobile phones, one which is used by UKWs as much as by any other sector of society.
Text and Data Mining

An intellectual tool for researchers is the ability to make connections between a large collection of seemingly unrelated facts, and as a consequence create new ideas, approaches or hypotheses for their work which are presently buried deep in extensive collections. This can be achieved through a process known as text mining (or data mining if it focuses on non-bibliographic data). These relationships would be extremely difficult to discover using traditional manual-based search and browse techniques. Both text and data mining build on the corpus of past publications – standing not so much on the shoulders of giants as on the breadth of past published knowledge and accumulated mass wisdom. The claim currently being made for text and data mining is that it will speed up the research process and capitalise on work which has been done in the past in a new and effective way. However, a number of features need to be in place before this can happen. These include:
– easy access to a vast corpus of research information
– available in a consistent and interoperable form
– freely accessible, without prohibitive authentication controls
– covering digitised text, data and other media sources
– unprotected by copyright controls (over creation of derivative works)
– a single point of entry supported by a powerful and generic search engine
– a sophisticated software mechanism for enabling the machine (computer) to analyse the collection for hidden relationships.
Currently the full potential for text/data mining is not being realised because several of the above requirements are not being met. There are too many ‘silos’ of heavily protected document servers (such as those maintained by individual STM journal publishers) to provide the necessary critical mass of accessible
data. There is also little interoperability between the various protocols and access procedures. Google is an exception. It has recognised that it is in the data mining business, sitting as it does on a huge database of usage data from which it can extract new and emerging behaviour patterns. Nevertheless, text and data mining is still at an early stage in its development; but given the unrelated push towards an ‘Open Access’ environment (which undermines the ‘silo’ effect), text/data mining may become a significant research tool within the next two to five years. Who will run with this? In Europe, it is currently a biomedical project funded by a consortium of research councils and charities – Europe PMC⁴ – which uses the input from a number of publishers to enable text mining, although only against objections from some leading STM publishers. The general thrust of all of these provisions is to recognise the huge potential that exists for combining different data sets – particularly the very large ones such as weather data, patient health records (suitably anonymised), Facebook behaviours, data from the Large Hadron Collider, soil records, etc. – in new and perhaps unexpected ways, to produce new discoveries. By being made public and accessible in this way, the data becomes not just part of the record of scientific enquiry, but also a tool from which new knowledge is generated. The barriers to text mining created by publishers were outlined in a report published by Jisc in March 2012 (McDonald et al., 2012). Its key finding was that there is potential for significant productivity gains, with benefit both to the sector and to the wider economy, from the adoption of text mining. The report claimed that if text mining produced just a 2% increase in productivity it would be worth an additional £123 million to £157 million in working time per year.
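At its simplest, the ‘hidden relationships’ text mining seeks can be surfaced by counting which terms co-occur across a corpus. A toy sketch follows; the corpus and term list are invented, and production systems use entity recognition over millions of full-text documents rather than substring matching over a handful of sentences:

```python
from collections import Counter
from itertools import combinations

# Toy corpus of abstracts; real text mining works over millions of papers.
corpus = [
    "magnesium deficiency linked to migraine",
    "migraine patients show low magnesium levels",
    "calcium channels and migraine studied",
]

# Terms of interest (in practice, produced by entity recognition).
terms = {"magnesium", "migraine", "calcium"}

# Count how often each pair of terms appears in the same document.
pairs = Counter()
for doc in corpus:
    found = sorted(t for t in terms if t in doc)
    for a, b in combinations(found, 2):
        pairs[(a, b)] += 1

# The most frequent pair hints at a relationship worth investigating.
print(pairs.most_common(1))  # [(('magnesium', 'migraine'), 2)]
```

Even this crude co-occurrence count illustrates the principle: relationships scattered across separate papers, invisible to any single reader, become visible when a machine reads the whole collection.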
Another report, published by the McKinsey Global Institute in 2014, said that “big data” technologies such as text and data mining had the potential to create €250bn (£200bn) of annual value for Europe’s economy if researchers were allowed to make full use of them. There was already some significant use of text mining in fields such as the biomedical sciences and chemistry, and some early adoption within the social sciences and humanities. Current U.K. copyright restrictions, however, mean that most text mining in UKFHE (U.K. further and higher education) is based on Open Access documents or bespoke arrangements. This means that the availability of material for text mining is limited to at best 20% (currently) of available material, and only where that material adopts Creative Commons BY (attribution to the original works) access rights.
4 Europe PubMed Central, available at: http://europepmc.org/
The report also analyses the potential costs of implementing text mining. These relate to access rights to text-minable materials, transaction costs (participation in text mining), entry costs (setting up text mining), staff and underlying infrastructure. “Currently, the most significant costs are transaction costs and entry costs. Given the sophisticated technical nature of text mining, entry costs will by and large remain high”. The report goes on to claim that the current high transaction costs are attributable to the need to negotiate a maze of licensing agreements covering the collections researchers wish to study. Against the costs, the benefits identified include: increased researcher efficiency; unlocking hidden information and developing new knowledge; exploring new horizons; an improved research and evidence base; and improving the research process and quality. Broader economic and societal benefits include cost savings and productivity gains, innovative new service development, new business models and new medical treatments. Nothing was said specifically about enticing new knowledge worker groups into demanding greater access to text mining services, but this is implicit in some of the listed benefits. The real challenge facing the adoption of text mining in the U.K. remains, however, the copyright issue, which was reviewed by the Hargreaves Review (Hargreaves, 2011). This endorsed the recommendation for a copyright exception for text mining for non-commercial use. “Legal uncertainty, inaccessible information silos, lack of information and lack of a critical mass are barriers to text mining within UKFHE. While the latter two can be addressed through campaigns to inform and raise awareness, the former two are unlikely to be resolved without changes to the current licensing system and global adoption of interoperability standards” (Jisc, 2012). The Jisc report suggests that the U.K.
has a number of strengths that put it in a good position to be a key player in text mining development, including a strong framework for innovation and the natural advantage of its native language. But there is evidence to suggest a degree of market failure in text mining at present. This can be addressed by a number of changes, including adopting new business models and relaxing copyright rules. Jisc recommended that the U.K. higher education sector work closely with publishers to overcome legal barriers. Dr Peter Murray Rust is a keen advocate of the role of text and data mining in scientific research, which he defines as the use of machines to read and understand massive amounts of documents. Other agencies giving their support to the concept of text and data mining include RCUK, the Wellcome Trust, the National Science Foundation and the European Commission. Another important feature of the current scientific publishing system which text and data mining would overcome is that – according to Peter Murray Rust – “we
repeat about 25% of our chemistry because we didn’t know we’d done it already” (Murray Rust, 2014). Duplication of research activity is a weakness in the traditional publishing system which a digital approach could help overcome. Asking for permission from publishers to enter the academic walled garden is an option, though a time-consuming one. A representative from Elsevier, Dr Alicia Wise, claims that, in principle, her company is happy to allow mining of its content. “We want to help researchers deepen their insight and understanding, we want to help them to advance science and healthcare and we want to be able to do that in ways that help realise the maximum benefit from the content we publish. Text mining is clearly a part of this landscape and it will continue to be and we’re keen to support it.” Elsevier says that it has now made it easy for scientists to extract facts and data computationally from its more than 11 million online research papers. Other publishers are likely to follow suit, lowering barriers to the computer-based research technique. But some scientists object that even as publishers roll out improved technical infrastructures and allow greater access, they are nonetheless still exerting tight legal controls over the way text mining is done. Under the arrangements, announced in January 2014 at the American Library Association conference in Philadelphia, researchers at academic institutions can use Elsevier’s online application programming interface (API) to batch-download documents in computer-readable XML format. Elsevier has chosen to provisionally limit researchers to 10,000 articles per week. These can be freely mined – so long as the researchers, or their institutions, sign a legal agreement.
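Elsevier’s actual API is not reproduced here, but a provisional cap of 10,000 articles per week implies client-side bookkeeping of roughly the following shape. This is a hedged sketch with a rolling one-week window; the class and its interface are hypothetical, and the network fetching itself is omitted:

```python
import time

WEEK_SECONDS = 7 * 24 * 3600
WEEKLY_QUOTA = 10_000  # the provisional per-week article cap described above

class QuotaTracker:
    """Hypothetical client-side guard for a rolling weekly download quota."""

    def __init__(self, quota=WEEKLY_QUOTA):
        self.quota = quota
        self.timestamps = []  # one entry per article fetched

    def allow(self, n=1, now=None):
        """Return True and record the fetch if n articles fit in the window."""
        now = time.time() if now is None else now
        # Drop fetches older than one week from the rolling window.
        self.timestamps = [t for t in self.timestamps if now - t < WEEK_SECONDS]
        if len(self.timestamps) + n > self.quota:
            return False
        self.timestamps.extend([now] * n)
        return True

tracker = QuotaTracker(quota=3)          # tiny quota for demonstration
print([tracker.allow(now=0) for _ in range(4)])   # [True, True, True, False]
print(tracker.allow(now=WEEK_SECONDS))            # True (window has rolled over)
```

A rolling window rather than a fixed weekly reset is a design choice: it prevents a miner from downloading 20,000 articles in two days straddling a week boundary.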
The deal includes conditions: for instance, that researchers may publish the products of their text-mining work only under a licence that restricts use to non-commercial purposes, can include only snippets (of up to 200 characters) of the original text, and must include links to original content. Whilst this development does not impact significantly on the unaffiliated knowledge worker audience, it does show that changes are taking place in the way scientific information is being produced and used, and in this volatile situation the needs of the wider audience could also be addressed.
Online Communities

According to Wikipedia, an online community is:

a virtual community whose members interact with each other primarily via the Internet. Those who wish to be a part of an online community usually have to become a member via a specific site. An online community can also act as an information system where members
can post, comment on discussions, give advice or collaborate. Online communities have become a very popular way for people to interact, who have either known each other in real life or met online. The most common forms people communicate through are chat rooms, forums, e-mail lists or discussion boards. Most people rely on social networking sites to communicate with one another but there are many other examples of online communities.
Publishing Technology, a publishing-specific software solutions provider, and Bowker Market Research released the findings of U.S. and U.K. research into the growth of academic and trade publishing online communities. The number of publisher-owned online communities is set to more than double within two years, according to their research (Publishing Technology, 2014). The study found that two-thirds of responding publishers currently host communities, and that this was set to rise to over 90% by 2015. A quarter expected to have seven or more networks up and running by 2015. The survey included publishers across the trade and academic sectors and revealed that trade publishers are currently most engaged in this area, with 86% of respondents owning an online community in some form. The study also investigated the rationale and perceived benefits for publishers moving into this arena, revealing that 84% of publishers felt their spending on online communities would increase in the next two years, with only 14% envisaging expenditure remaining static. 64% of publishers with online communities were convinced that their investment in this market was already paying off, and a further 24% believed it would do so in the short term. There are indications of interest in online communities among scientific publishers. In both the publishing and library worlds there has been increasing discussion, and some experimentation, about how publishers could present themselves as mediators of scholarly interaction and discourse in a fully digital environment. It is the convention, however, for publishers to conceal much of what they are actually doing in this area. Nevertheless it is possible that publishers’ strong positions in professional niches might provide them with the opportunity to exploit social media, because experience in related professional areas such as medicine (see bmj.online) has suggested that interactions among ‘practitioners’ using research literature are quite specific.
The British Medical Journal (BMJ) appears to offer something unique – see http://doc2doc.bmj.com/blogslist.html. The obvious conduit for such interactions using social media would be professional or scholarly societies but investigations of associations have shown much less in the way of, for example, mediated blogs than one might have expected (bmj notwithstanding). The same is true for any community functions which employ social media.
On the whole, portals and communities, whether organised by publishers or intermediaries, do not seem to have taken advantage of the revolution in social media opportunities. Nature is probably the publisher best known for its efforts in the online community area, and is known to have put many years of investment into how to use social media for community building. Most publisher-generated blogs take the approach of informing the community what they specifically are doing for them, rather than generating interaction and community collaboration. PLOS blogs, for example, are dominated by open science rhetoric and not concerned with the specific scholarly interests of those who publish with them. Wiley and Elsevier are known to be experimenting with community building, often with groups of journals, but it is difficult to find anything concrete on these experiments. The correct formula has yet to be discovered and shared.
Workflow Processes

Workflow processes are also fashionable. Workflows are networks of analytical steps that may involve database access and querying, data analysis, text and data mining, and many other steps, including computationally intensive tasks on high-performance cluster computers. As more digital resources are created and exposed as web services, workflow projects provide an attractive means for collecting relevant information to meet specific needs. Workflow processes take a ‘soup to nuts’ approach, enabling the full life cycle of research to be accommodated as part of a holistic whole. They do not distinguish the research itself from the communication of that research. The process for manuscript creation in the research life cycle can have the following elements to it:
– Idea for a research topic
– Funding sources identified
– Collaborators found
– Initial exploration undertaken (on competitive works)
– Patents and standards investigated
– Extensive literature research undertaken
– Research activity
– Exploitation potential investigated
– Results written up
– Results shared with peer group
– Publishable manuscript prepared
– Sent to publisher
– Refereed
– Corrections and changes made
– Edited and DOIs (digital object identifiers) applied
– Printed and included in an accessible file server.
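The life cycle above can be sketched, purely illustratively, as an ordered pipeline. The stage names follow the list; the ordering rule and function names are assumptions made for the sake of the example and do not reflect any real workflow system:

```python
# Sketch: the manuscript life cycle as an ordered pipeline of stages.
# Stage names are condensed from the list above; the strict-ordering
# check is an assumed illustration, not any particular workflow tool.
STAGES = [
    "idea", "funding", "collaborators", "initial exploration",
    "patents and standards", "literature research", "research",
    "exploitation potential", "write-up", "peer-group sharing",
    "manuscript", "submission", "refereeing", "corrections",
    "editing and DOI", "publication",
]

def next_stage(completed):
    """Return the first stage not yet completed, enforcing that
    stages are reached strictly in order; None once all are done."""
    for i, stage in enumerate(STAGES):
        if stage not in completed:
            if not all(s in completed for s in STAGES[:i]):
                raise ValueError(f"stage {stage!r} reached out of order")
            return stage
    return None
```

A project that has completed the first three stages would, under this sketch, next undertake initial exploration; note that only the last handful of stages (submission onwards) fall within the traditional STM publisher's remit.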
It is only the last few functions in which the STM publisher has been active, though systems are being developed which help researchers from the first to the last stage in the above. However, it is something which does not fit readily into the current STM publishing portfolio of products and services, which are highly discrete in the functions they address and aim solely at commercialising the end, the nuts, of the research process. In a separate study undertaken by CIBER on behalf of RIN and OCLC, the research support services which exist in U.K. (and U.S.A.) research-focused universities were investigated. The research life cycle was taken as the model, from inception of idea, through initial investigation of what is available, through seeking funds and collaborators, doing the research, and finally writing up and having the results published. The study found that information-based research support services focus on the initial stages of the research cycle (e.g. identifying grant opportunities and funding) and on the later stages (e.g. knowledge transfer and realising commercial potential). In between, where the intensive research effort takes place, there is much less evidence of research support being provided by the institution. When questioned about the most significant barrier which faces them in their research, scholars said it was the need to comply with the various burdensome regulations, be they ethical clearance, standards, meeting budget requirements, financial reporting processes, etc. Researchers said they were there to do research, not to do all the many ancillary things that are attached to compliance (CIBER, 2010). According to the Wikipedia definition, workflow processes bring together a number of technical strands. But more importantly, they take account of a cultural and behavioural swing with respect to the way scientific communication is being conducted.
Instead of the traditional approach of publishers delivering information on a take-it-or-leave-it basis, workflow processes require awareness of the end users’ needs to be the starting point for engineering content to meet a clearly specified demand. ‘User centricity’ is the by-word, which means that if it is to become a serious component in the future for STM publishers it requires a new form of ‘publishing’ and understanding of end users to occur. There is no guarantee that the existing key STM publishing players can make that transition across the ‘valley of death’ from an ‘author-centric’ (supply driven) approach to one in which user demand (‘user centric’) becomes the key driver. It may well
mean that the very structure of traditional STM publishing is ill-suited to the needs of a workflow process approach. New services are emerging using social networking. These are both collaborative and transparent – two features of the Internet and social media development. They are not obtrusive. It is not scientific publishing in the traditional sense. New players may take the lead, and traditional publishers may resort to licensing their content to these new players in future. Organisations such as Collexis, Sermo, SAP, Oracle and the many sector specialist software houses may feel there is an opportunity for them to become leaders in the drive towards creating more efficient information delivery systems based on the creation of new workflow practices. Already there are instances of workflows developed by the National Institutes of Health (BIRN), the National Science Foundation (GriPhyN, GEON, SCEC, SEEK), the US Department of Energy (Sci-DAC/SDM, GTL) and the UK e-Science programme (MyGrid, DiscoveryNet). One possibility is to tackle the complexities of digital scientific communication by developing new ways of capturing, organising and measuring scattered inputs – text, data, video, software, computational programmes – so that they end up becoming part of a coherent contribution to science. Perhaps the most successful experiments of this type can be found at websites such as Faculty of 1000 (F1000) and thirdreviewer.com, and in online reference libraries such as Mendeley, CiteULike and Zotero, which allow users to bookmark and share links to online papers or other interesting sites. Google also announced (February 17, 2011) that it was upgrading Social Search, which takes social media into account to improve and personalise search results. Workflow processes potentially herald a new era, offering a more digitally-aware approach in future.
It suggests a closer link between the workflow process developer and the end user, by-passing intermediaries such as libraries and publishers with their specific and more limited agendas. This would add significantly to the disintermediation process which has been ongoing within STM during the past few decades. In particular, it challenges STM publishers to adjust to the situation whereby they need to formalise contact with the end user and be a supplier to new workflow processes. Being locked out of the primary point of contact with end users raises the risk of both publishers and libraries being strategically vulnerable in future.
Data and Datasets
The transformation of scientific research information from a text to a data focus has been revolutionary in the past decade or so, as has been the revolution
in demographics and social networking. Recent years have shown an exponential growth in the volume of available research data. Most stakeholders across the spectrum of researchers, funders, librarians and publishers agree on the benefits of having researcher-validated data available and findable for reuse by others. Nevertheless, anecdotal estimates suggest that approximately 70% of research data are never shared, remaining on the personal computers or hard disks of the researchers or, at best, in their department at the institute. In part this is lack of awareness – it is not always evident when and where a dataset has been compiled which would be useful to someone else working in a related field. In order to put some structure around the dataset issue, an organisation called DataCite⁵ has been created to enable agencies within individual countries to seek out and apply digital object identifiers (DOIs) to the datasets within their domain. DataCite has a Managing Agent (currently the German National Library of Science and Technology, TIB, in Hannover) along with 15 regional members in 11 countries. However, some parts of science have moved forward more rapidly – into 'Big Science' in particular. It has become much more collaborative, open and transparent in its operations. Large data sets of scientific research results are being maintained by global centres such as CERN, for particle physics, and the National Center for Biotechnology Information (NCBI) for genetics and bioinformatics. In many other scientific disciplines large datasets represent the source for, and the target of, latest research results. The publication of an article or a book about research results follows many months or years later, and fulfils a different role. Communication is increasingly occurring through the data centres, using tools and procedures more akin to the social networks, and is immediate and instantaneous and often free.
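One practical consequence of assigning DOIs to datasets is that they become machine-resolvable. As a minimal sketch (the DOI shown is invented, and it is assumed that doi.org content negotiation behaves as documented by the DOI community), one could request citation metadata for a dataset DOI rather than its human-oriented landing page:

```python
# Sketch: asking the DOI resolver for machine-readable citation
# metadata (CSL JSON) for a dataset DOI via content negotiation.
# The DOI below is made up for illustration only.
from urllib.request import Request

DOI_RESOLVER = "https://doi.org/"

def metadata_request(doi):
    """Build a request whose Accept header asks for CSL JSON
    metadata instead of the dataset's landing page."""
    return Request(
        DOI_RESOLVER + doi,
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
    )

# To actually fetch (requires network access):
#   with urllib.request.urlopen(metadata_request("10.1234/example")) as resp:
#       record = resp.read().decode("utf-8")
```

The same mechanism underlies citation of datasets alongside articles: a DOI gives the dataset the stable, resolvable identity that journal articles have long enjoyed.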
As one example of cross-stakeholder participation in datasets, University College London (UCL) and Elsevier have announced they will establish the UCL Big Data Institute (Elsevier, 2014). This is a new collaboration to explore innovative ways to better serve the needs of researchers through the exploration of new technologies and analytics as applied to scholarly content and data. Elsevier believes linking analytics and scientific content is one of the key enablers to better serve scientists. The company will fund research and studentships through the new Institute, offering opportunities to research the analysis, use and storage of big data. Elsevier will establish a Centre of Excellence within its web analytics group in connection with the Institute, co-staffed with UCL researchers. UCL plans
5 Available at: https://www.datacite.org/
to have a connected community, from particle physics to digital humanities, sharing insights into better use of large computational resources in research. UCL's Big Data Institute will be a key addition to this family of activities (see Elsevier, 2014). To the extent that scientific publishing is about helping research to get done, publishers need to engage with the problems researchers are having in this area if they want to be part of the solution. The most obvious driver behind the growth of Big Data is the increasing demand from funders that research data be published online along with (or alongside) the text of journal articles. In a policy memorandum released in February 2013, the U.S. White House's Office of Science and Technology Policy signalled to all federal agencies with more than $100M in R&D expenditures that they were going to have to 'better account for and manage the digital data resulting from federally funded scientific research'. Given the central importance of OSTP in government-funded research in many areas, this made many people sit up and take notice. It resulted in a mandate which requires each of these agencies to make their research outputs, both text and data, 'open'. In the U.K., where government support for Open Access has been even more wholehearted, the body that controls funding for HE research, RCUK, has a policy commitment to effect 'transparency and a coherent approach across the research base' when it comes to making research data available to users. 'Publicly funded research data is a public good, produced in the public interest, which should be made openly available with as few restrictions as possible in a timely and responsible manner.' (The Wellcome Trust's policy on data management and sharing echoes some of these themes, adding the rationale that this 'ensures that these data can be verified, built upon and used to advance knowledge and its application to generate improvements in health'.)
The point of quoting these extracts is to show that, for funders, it is not only about whether data gets published but also how it gets done. It has to be discoverable, and presented in a way that makes it easily usable by third parties. Some also make stipulations about the metadata that has to be provided. A requirement is to achieve the same sort of stability in the publishing process for data that exists for text, using DOIs. Aiding discoverability, accessibility and usability will require an effort analogous to that which publishers have already made in putting the text of journal articles online. Having been through this once, publishers have an opportunity now to ramp up the services they provide on data publishing. The question is whether they will grasp this nettle, and how they will generate a viable income stream.
Gartner's Hype Life Cycle
The Gartner 'Hype' Cycle demonstrates that the many different Web products are at different stages of development, and all go through a period of hype and disillusionment before settling down to an even development programme. Not all Web 2.0 products have reached such equilibrium. Many fail for reasons which were not apparent to the innovator.
Fig. 17.1: The Gartner Hype Cycle (visibility plotted over time: Technology Trigger → Peak of Inflated Expectations → Trough of Disillusionment → Slope of Enlightenment → Plateau of Productivity)
As a concept, the Hype Life Cycle model helps organisations understand that the path to market acceptance is not always smooth, and that disappointing results along the way may be a reflection of the uneven nature of product life cycles. Specifically, Fig. 17.1 could illustrate the various stages at which scientific information services might sit on the Hype model. However, it is a dynamic model, as well as subjective in the assessment of where individual products could be plotted. For example, although institutional repositories (IRs) could be placed at an early stage – at 'technology trigger' – in recent years there has been much commitment, by funding agencies in particular, to move them up the slope and beyond the 'peak of inflated expectations'. Web 2.0 may be entering the 'trough of disillusionment' according to some pundits, whereas digital rights management (DRM) may be on the 'slope of enlightenment' or may even have reached the 'plateau of productivity.' Social networking and social publishing for the science community may still be reaching for the 'peak of inflated expectations' and therefore still have some way to go before they become productive tools.
The importance of this concept is that it emphasises that there is no smooth path from ‘technology trigger’ to new information service adoption, and that good new ideas may fail just as often as they succeed. It also suggests that there is a ‘right time’ and ‘right conditions’ for new services to be introduced successfully (the ‘tipping point’). Unaffiliated Knowledge Workers are bystanders in seeing how new products/services succeed or fail along the hype cycle.
Scientific Publishing and Innovative Trends
Following on from the above, it would be wrong to assume that publishers and librarians are sitting still, not reacting to any of the forces which make up the 'perfect storm'. Whilst there is much innovation being introduced by new players within the information business, there are also some interesting applications being introduced by the existing players. For example, researchers themselves are building their own tools and services. Founded in 2007 by two neuroscientists, Henry and Kamila Markram from the Swiss Federal Institute of Technology (EPFL), Frontiers⁶ was formed out of the collective frustration and desire to empower researchers to change the way science is created, evaluated and communicated, through open access publication and open science tools. The value of the service they have created has been recognised by one of the more forward-looking commercial publishers – Macmillan/Nature Publishing Group – to the extent that it was acquired by Nature in late 2013. Similarly, Figshare⁷ was born out of the frustration of founder Mark Hahnel, who believed academics were not getting the credit they deserved for research. He set up the company, part of Digital Science (also part of Nature PG), to allow researchers to publish all their data in a citable and shareable manner. Dave Copps created Brainspace⁸, which tackles the frustrations scientists and researchers face given the abundance of information on the web, by providing a self-organising platform. Mendeley, now owned by Elsevier, was similarly a startup from several German and U.K.-based academics to share information and sources among kindred researchers. As reported by Peter Murray-Rust, the 250,000 people who have helped create Open Street Map and get it accepted as being among the highest quality
6 Available at: http://www.frontiersin.org/ 7 Available at: http://figshare.com/ 8 Available at: http://www.brainspace.com/
and most useful cartographic products – the contributors were not old-school cartographers. "They came from all walks of life, including cyclists and ramblers. Wikipedia didn't come from converted academics, it came from people outside academia and encyclopedias. Academia (with a very few exceptions) howled it down and it has succeeded in spite of this". (Dr Peter Murray-Rust, GOAL, August 2012) Publishers seem unable to take advantage of the many new opportunities presented by digital media because among the properties of digital content is the ability to easily make copies, including unauthorised copies. Publishers find themselves fighting a rearguard action, working with policy groups and lawyers to prevent piracy. One way out of this situation is to take advantage of other properties of digital media and make content increasingly dynamic and interactive, and hence uncopyable. This is the avant-garde of publishing: using the capabilities of digital media to get beyond fixed texts and into texts that constantly evolve. In a report delivered to members of the International Council for Scientific and Technical Information (ICSTI) in Beijing, China, in June 2011, a representative from Elsevier gave the following summary of the main areas which publishers were looking into in order to keep pace with emerging market demand (ICSTI, 2011):
– Being aware of concerns about OA implementation, and creating processes so that STM publishers survive
– Creating new forms of scientific article dissemination (see above)
– Developing tools and workflow processes
– Launching SciVal (Elsevier) and similar value-added and premium products
– Buying software companies for vertical and horizontal expansion
– Outsourcing/offshoring production
– Developing business opportunities in Asia, notably in China and India
– Delivering content on handheld devices (smart phones and tablets such as iPads).
All these developments could act to the advantage of knowledge workers in general, but the question remains whether the largely small-scale STM publishing industry, and the disparate research library community, are sufficiently motivated and structured to run with the various options available. In his book, Neilsen (Neilsen, 2011) claimed that the future profile of scientific publishers would be heavily technology-weighted, with less of an editorial, financial or administrative focus. The jury is still out on survival prospects for all parts of the STM publishing chain. It may be that the lead will be provided by new organisations, not constrained by the heritage of corporate missions set at a time when print was the
dominant form of information dissemination, to create new appropriate and viable business paradigms for research results. We can but wonder at the rapidity with which services such as Google, Facebook, Twitter, LinkedIn and the many moderated bulletin boards have emerged to enable the rapid transfer of the latest research ideas and results. From a standing start less than two decades ago, the number of Internet users worldwide now exceeds 3.0 billion (out of an estimated global population in July 2012 of 7.2 billion). Facebook alone has claimed a membership base of 1.44 billion (June 2015) – one in five of the world's population are allegedly Facebook members (but not necessarily active members), 60% of whom use mobile devices to access the service. Twitter has over 300 million users; LinkedIn 360 million; Flickr 51 million; Instagram 300 million; Google+ 400 million; whilst even one of the newer entrants to the social bookmarking scene, Mendeley, has amassed a user base of 2 million. Social media is rapidly coming of age, and this has powerful implications for the research process. Have these many new informal communication channels made much of an impact on formal scientific communication? So far there seems little evidence of much of a breakthrough. As Martin Weller writes in his book "The Digital Scholar": These emerging themes [crowd sourcing, light connections, online networks] sit less comfortably alongside existing practices and can be seen as a more radical shift in research practice. A combination of the two is undoubtedly the best way to proceed, but the danger exists of a schism developing between those who embrace the new approaches and those who reject them, with a resultant entrenchment to extremes on both sides. This can be avoided in part by the acknowledgement and reward of a new form of scholarship. (Weller, 2011)
The schism is in part a reflection of the separate behaviour patterns of a researcher as User and Author. This was made evident during the interviews conducted in person and by phone with U.K. researchers in recent years. The 'dinosaur' sector, which focuses on personal contacts as the prime source for scientific updates, accounted for half the respondents, compared with the other half who were self-confessed 'grubbers' among all that social media had to offer. The latter had embraced the digital communication possibilities. Science communication has to adapt to a world where Google dominates the search space, and Amazon makes online purchasing easy; where eBay and Paypal set the parameters for selling and buying individual items; where Skype and Viber make connections cheap and easy; where Facebook, Twitter and LinkedIn open awareness of personal and professional activities. With the new
paradigms being explored as society moves towards openness, and as technology platforms are being created to satisfy emerging social trends for rapid communication, there is an opportunity to re-write the scientific communication manual and not be constrained by past practices and the survival needs of current stakeholders. The needs of a wider community – including Unaffiliated Knowledge Workers – should not be forgotten in this transformation.
How the Research Process is Changing
Michael Neilsen, in his book on 'Collective Intelligence' (Neilsen, 2011), gives a number of examples of how researchers in science now undertake background research activities, and how different this process is compared with the past. In his examples, Neilsen points out that in many aspects of scientific endeavour cross-disciplinary expertise is required. In the past it was fortuitous if a researcher stumbled across the right person who could provide answers to something which, once resolved, opened up the project to further development. These stumbling blocks can now be described and circulated through social media, enabling a global audience to be reached and increasing the chances of the right expert being found. This is a different sort of networking – no longer of known associates but an open network in the world of science. Openness and collaboration/sharing are becoming powerful features of how scientific research is being conducted in the twenty-first century, and the mechanisms which enable this to happen are social media related, and not the traditional publication system. It is the scientist him/herself who initiates and pursues the development of the open networks; there are no workflow practices which the publisher puts in place to expedite this. As a result, a major part of new information exchanges are digital and informal, bypassing publishers and librarians. Whilst publishers focus on the size and citation impacts of their primary research journal programmes, they risk being sidelined by a new evolutionary process, driven by the researchers themselves, which provides the communication services that underpin new research activities. And this enables greater involvement and participation by those who were caught outside the barriers of the scientific communication system because they were not affiliated with the subscription-buying institutions.
The social networking involved in collaboration and designed serendipity breaks down those traditional barriers and enables something much more powerful to emerge into the science research process. The Net Generation is more alive to interactive communication – it has been wired into their system as a result of years spent playing online games. There is
therefore – in theory – a gap which is not being filled by the existing mechanisms for publishing, nor by established publishers. This gap has been filled in other sectors of society through using the available social networking tools. So far this has not happened in scientific publishing to any noticeable extent.
The Nautilus Concept
An industry commentator based in the USA – Joseph Esposito – has applied the Nautilus concept to the drift outwards in information need (Esposito, 2007). In so doing he suggests that a different publishing model is required to reach the knowledge workers further out from the source of content creation. From the requirement for rapid access to primary research findings at the core of the Nautilus (serving the leading academic researchers), the nature of demand may change towards secondary (abstracts and metadata) and tertiary (review, reference) material the further out one goes from the centre. The implication is that there is a new opportunity – a gap – which could open up to satisfy the demands of the 'long tail'. This gap is more tertiary, informed and educational in nature than the high-level primary scientific reporting for academic researchers found in research journals. Even though technical papers are mainly written for other technical experts, there are often intriguing nuggets that should be made accessible to the layman. According to Esposito it is possible to envisage scientific communications as a spiral (see Fig. 17.2); the inner spiral represents the researcher's closest colleagues; the next spiral outwards is for people in the field but not working exactly on the topic of interest to the author; one more spiral and there is the broader discipline; beyond that are adjacent disciplines; until one moves to scientists in general, highly educated laypersons, university administrators, government policymakers, investors, and ultimately to the outer spirals, where there is the general public and consumer media. Something may be lost in translation as research information moves outward from the core research to the disciplines beyond. Without accuracy in 'translations,' the loss would be great, as many readers would not be able to determine the intrinsic value within different disciplines and errors in interpretation could arise.
This point has been alluded to in the earlier discussion of science and the media. At each spiral away from the centre, the role of the publisher grows. Researchers not familiar with the author will seek a way to evaluate his or her work, and a publisher’s brand is a form of insurance. “Formal publishing, in other words, assists an author not in speaking with a tiny group of peers but to a broader audience beyond them” (Esposito, 2007).
Fig. 17.2: Nautilus model of scientific communications (Source: Esposito, 2007; graphic by Sean Linkletter). From the innermost spiral outwards: a researcher's (and author's) intimate colleagues; people in the field but not working exactly on the topic of interest to the author; those in a broader discipline (e.g., biochemistry), and beyond that adjacent disciplines (e.g., organic chemistry); scientists in general, other highly educated individuals, university administrators, government policy-makers, investors; and consumer media, whose task is to inform the general public.
Esposito also points out that new, user-generated sites have emerged which link to the latest research in open access sources. These include news/journalistic sites such as Digg and Slashdot. This reflects a widening of the sources of information available to end users across all parts of the nautilus rings.
Engaging with the Wider Community
There is no one approach which governs the way the scientific community acts. "There is no dictator determining the patterns of behaviour that make up the scientific community. But out of the actions and relationships of millions of individuals certain regularities emerge. Once those habits arise then future individuals adopt them unconsciously" (Brooks, 2012). There is a powerful conservative element which emergent systems need to overcome. This conservatism can only be overthrown if better ways come along to displace the legacy. These new and better ways must be seen as significant improvements if they are to replace ingrained habits within the target audience.
Publishing a lay summary alongside every research article could be the answer to sharing a wider understanding of health-related information, according to the findings of a citizen science project in the U.K. entitled Patients Participate! Commissioned by Jisc and carried out by the Association of Medical Research Charities, the British Library, and UKOLN, Patients Participate! asked patients, the public, medical research charities and the research community, 'How can we work together in making sense of scientific literature, to truly open up research findings for everyone who is interested?' The answer came from patients, who said that they want easy-to-understand, evidence-based information relating to biomedical and health research. Every day people are bombarded by health news, advice columns, medical websites and health products, and making sense of all this information can be difficult. Tracey Brown, Director of Sense about Science, says, "We have been working with scientists and the public for some years to challenge misinformation, whether about the age of the earth, the causes of cancer, wifi radiation or homeopathy for malaria. It is often very effective but no sooner is attention turned elsewhere than misleading claims creep back up again. To make a permanent difference, we need the public to be evidence hunters. We are delighted to encourage patients to engage with the evidence for medical claims." Alastair Dunning, digitisation programme manager at Jisc, has also pointed out that, "Jisc believes that publicly-funded research should be made available for everyone and be easy to find. We have funded work to show how making access to scientific literature enables citizen-patients to participate in the research process, therefore providing mutual understanding and better links between scientists, medics, patients and the general public." Engaging with the wider community is increasingly important for researchers.
Some universities – such as the University of Warwick – offer researcher training in communicating with lay audiences. Medical research charities also have an important role in providing patients and the public with information about the research they fund. All this helps the transfer of information and ideas from the forefront of scientific endeavour to the mass audience.
Summary
Against the background of the above requirements, a number of new services and support processes are needed, which libraries, media and computer centres, publishers and commercial IT providers have to implement and integrate in their service portfolios. The new services include interface and web design,
retro-digitisation, processing and research data curation, maintenance and networking of repositories, and hosting and storage of applications and data. Consequently, new business and exploitation models have to be established in order to improve scholarly communication. These service scenarios need to be integrated into the publication processes and should exploit the potential of digital media. It should be clear that these efforts are not available for free. The accounting of services and the licensing of generated or hosted materials, wherever it might be necessary and useful, will play an important role. It is apparent that there are many new ways to 'communicate' the results of research, and the role of the traditional books/journal media as the established and main way of doing this is challenged by the plethora of new options. Many of them are based on technological advances in the areas of digitisation, networking and distribution. But equally, many of the options arise as a result of a change in the mentality of the emerging generation of scientific users. The social demands for openness, sharing, interactivity, transparency – these give a different set of criteria which supports new options for research dissemination. In facilitating such new ways for dissemination, a different market structure for information services can emerge – one which provides more effective payback to the community for the funds which it has committed to the research process. And if this means that the gates of research results are opened wide, enabling knowledge workers across all areas and disciplines to participate in the process as the new creators and users, then so be it. This is not an argument specifically in support of Open Access in its various forms. There can still be commercial activities provided for premium services, but there is equally a need to be less exclusive and elitist about the results of society-funded research.
Open Access, nevertheless, does represent a serious stepping stone towards a world more tolerant of wider use of the output of scientific research. Technical advances, in disrupting the established order for scientific communication and offering powerful new alternatives to print-paradigm based products, can assist by bringing UKWs into the scientific publishing system as both readers and authors, particularly as and when technology addresses the needs of the wider community and its requirement to be better informed. Several studies promote the idea that synopses and abstracts of research results, written in a way which would be understood by the wider audience, could be introduced (see Esposito, 2007 and Allington, 2013). However, much depends on the profile of the unaffiliated knowledge workers – the main UKW groups will be looked at in the next chapters, starting with a search for unaffiliated knowledge workers within academia itself.
Chapter 18 Academic Knowledge Workers Introduction This chapter looks at trends impacting knowledge workers (or researchers) in academia. These individuals are beneficiaries of the current toll-based journal publishing system. Their information needs are met by the collection development budget of the research library. They merely have to ask and, if the funds are available in the library budget, their journal requests can be met. They personally suffer no financial penalty from requesting published research outputs. However, academics are nevertheless subject to access barriers – they may face restrictions in getting information they might need. This has been the subject of a CIBER report commissioned by RIN, entitled ‘Gaps and Barriers’, which looked at the problems of access to electronic articles in the U.K. (Rowlands, 2011). It is therefore also the case that they – the academics in universities – could be beneficiaries of a change in the way scientific literature is disseminated and accessed in future. Much depends on how robust the university is in meeting the environmental challenges which the concept of the university confronts in satisfying society’s needs for both higher education and research.
Mission of Universities Universities face a challenging future in five main areas: – Competition. Competition for students is growing as students and governments face up to austerity. The issue of increased student fees is a troublesome area. – Digital technology. New electronic systems are being introduced to offer online education, which changes the physical boundaries for academic teaching. Teaching no longer needs to be done in a centralised physical space. – Globalisation. Both teaching and research are becoming more international. This increases cross-border collaboration and information exchange. It also supports the emergence of global ‘centres of excellence’ which are able to trade on their brands and image. – Democratisation. The amount of knowledge available online, and the ease of adding to it, changes the elitist nature of current universities, and makes information more open to all.
– Industry. Increasing partnerships between universities and industry are being fostered to exploit research output from universities.
These changes have raised questions about the viability of many universities which rely on revenues from traditional teaching and research services. Some smaller universities could face closure, whereas the scope for new entrants, particularly in the online education sector, to take over some of the (digital) teaching roles is growing. In a study of Australian universities, consultants identified three possible courses of action (Ernst & Young, 2012). The first is for the university to maintain a ‘streamlined status quo’, by which universities would improve their interaction with the community at large. The second is to become ‘niche dominators’, targeting a particular customer segment with tailored education and research services. The third is for universities to become ‘transformers’, enabling private initiatives to work with universities to carve out new roles for themselves. Ernst & Young’s consultants claimed that “universities should critically assess the viability of their institution’s current business model, develop a vision of what a future model might look like, and develop a broad transition plan.” E-education is likely to be affected as traditional face-to-face teaching methods are replaced with new digital e-learning systems (Massive Open Online Courses, or MOOCs). The scope for expanding the range of an ‘educated’ society is improved with these new digital services, particularly if they are built onto the traditional assets of a university – brand, culture, image, reputation. Such changes will also have an effect on research output. One change is greater interaction between the university and other private and public third-party organisations. The elitism of current universities will decline in favour of greater partnerships and collaborations with non-academic centres.
Spin-offs from universities, such as Ingenta (emerging from the ashes of BIDS, the Bath University Information and Data Services) and CIBER Research Ltd (from University College London), are examples. Greater democracy and openness are to the advantage of knowledge workers, who could become participants in this more extensive collaboration. However, there is a countervailing force which suggests that research will become increasingly concentrated in those universities that can demonstrate excellence and impact. This goes against the idea of there being a broadening of the research process within academia. Increasingly, universities are under pressure to justify expenses and articulate the value of the education and research services they provide. We have seen the rise of productivity and impact measurements, such as the h-index, and
industry growth in companies such as Academic Analytics that enable universities to benchmark against their peers, identify strengths and weaknesses, monitor performance, and allocate resources. There has also been the rise of university dashboards, or control panels listing all services on offer from the university, including recruitment, admission and graduation rates, time to degree, academic performance, financial support, counselling, student-to-faculty ratios, etc. As imperfect as these measures might be, they reflect an increasing reliance on quantification of service impact using business intelligence techniques. What are these academic indicators suggesting about publisher and library involvement? And how will this play out for widening the research participation net to include UKWs?
Academic Researchers The estimated number of academic researchers worldwide grew from about 4 million in 1995 to approximately 5.8 million in 2002 and 7.2 million in 2007, rising more rapidly in developing than in industrialised countries. According to the latest available UNESCO Science Report (for 2010; the next edition is not due for publication until November 2015), countries with a smaller scientific capacity are finding that they can acquire, adopt and sometimes transform existing technology and thereby ‘leapfrog’ over costly investments in technical infrastructure. Technological progress is allowing these countries to produce more knowledge and participate more actively than before in international networks and research partnerships. This trend is fostering a democratisation of science worldwide. Tab. 18.1 gives estimates of the numbers of researchers as included in UNESCO’s Science Report for 2010.
Tab. 18.1: World researchers in 2002 and 2007 (estimates, in thousands), broken down by region (world total; developed, developing and less developed countries) and by country (United Kingdom, United States, China, Germany, France). [Table values not reproduced.] Source: UNESCO Institute for Statistics, 2010
Researchers have been defined (NSF, 2014) as workers engaged in the creation and development of new knowledge, products, and processes. The above figures give the UK 3.4% (for 2002) and 3.5% (for 2007) as its share of the world total for researchers (full time equivalent). This is fairly static, particularly in view of the growth for China, whose world share grew from 13.9% to 19.7% between the two years. The United States still remains dominant (just), with 23% of researchers in 2002 and 20.0% in 2007. The UK share of publications is above its human resource share of research: in 2002 the UK accounted for 8.3% of the world total, but by 2007 this share of publications had fallen (largely in response to the awakening of the Far East economies and their commitment to research) to 7.2%. In terms of R&D investment per researcher the UK falls behind the United States, Korea, France, Germany and South Africa (UNESCO Science Report 2010, p 12). These ratios will have changed as a result of the sudden halt caused by the global economic recession triggered by the sub-prime mortgage crisis in the U.S.A. in the third quarter of 2008. The effects of this on the estimated numbers of researchers according to the UNESCO data will only become apparent with the release of the 2015 World Science Report in November 2015. Meanwhile, the National Science Board publishes Science and Engineering Indicators for 2014. This includes a global overview, not just U.S.A. data, which can be summarised as follows. Worldwide, the number of workers engaged in research has been growing most rapidly since the mid-1990s in China and South Korea. The United States and the European Union experienced steady growth but at a lower rate. Japan and Russia were exceptions to the worldwide trend. Between 1995 and 2011, the number of researchers in Japan remained largely unchanged, and in Russia the number declined.
However, there is an implicit question mark about the definition of a ‘researcher’ in the NSF data (and, by extension, the data from UNESCO). In the U.S.A., the science and engineering workforce could be defined in several ways: by workers in S&E occupations, by holders of S&E degrees, and by the
use of S&E technical expertise on the job. The estimated size of the S&E workforce in the USA varied depending on the criteria chosen. In 2010, estimates ranged from approximately 5 million to more than 19 million depending on the definition used. In the meantime, various pundits are uncertain about the precise number of researchers worldwide. Arthur Sale (University of Tasmania in Australia) posted the following comment on the GOAL List serv: In 2011 it [Australia] had 35 universities and 29,226 academic staff with a PhD. Let me assume that this is the number of research active staff. The average per institution is 835, and this spans big universities down to small ones. Australia produces according to the OECD 2.5% of the world’s research, so let’s estimate the number of active researchers in the world (taking Australia as ‘typical’ of researchers) as 29226/0.025 = 1,169,040 researchers in universities. Note that I have not counted non-university research organizations (they’ll make a small difference) nor PhD students (there is usually a supervisor listed in the author list of any publication they produce). Let’s take another tack. I have read the number of 10,000 research universities in the world bandied about. Let’s regard ‘research university’ as equal to ‘PhD-granting university’. If each of them has 1,000 research active staff on average, then that implies 10000 x 1000 = 10,000,000 researchers. That narrows the estimate, rough as it is, to 1.1M > no of researchers < 10M. I can live with this, as it is only one power of ten (order of magnitude) between the two bounds. The upper limit is around 0.2% of the world’s population. Can we do better than these estimates, in the face of different national styles? It is even difficult to get one number for PhD granting universities in the US, and as for India and China…!
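Sale’s back-of-envelope bounds can be reproduced in a few lines. This is only a sketch: every input is an assumption quoted in his post, not a verified statistic.

```python
# Reproduce Arthur Sale's back-of-envelope bounds on the number of
# research-active university staff worldwide. All inputs are the
# assumptions quoted in his post, not verified statistics.

# Lower bound: scale Australia's PhD-qualified academic staff (2011)
# by Australia's assumed 2.5% share of world research output.
australia_phd_staff = 29_226
australia_world_share = 0.025
lower_bound = australia_phd_staff / australia_world_share   # 1,169,040

# Upper bound: ~10,000 'research universities', each with
# ~1,000 research-active staff on average.
upper_bound = 10_000 * 1_000                                # 10,000,000

print(f"{lower_bound:,.0f} to {upper_bound:,}")  # 1,169,040 to 10,000,000
# Roughly one order of magnitude separates the two bounds, as Sale notes.
```

The spread of one power of ten is what makes Sale describe the estimate as liveable despite its roughness.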
Richard Price, founder of Academia.edu, estimated in November 2011 that the number of academics worldwide was 17 million (10.8 million graduate students plus 6.16 million faculty members). His figures are approximations based on figures from the United States and grossed up to provide global estimates. According to an NSF report (National Science Foundation, 2008), there were 600,000 graduate students in the U.S.A. in science and engineering fields. As science and engineering represents 22% of the US total, this gives the national graduate student figure for all subjects as 2.7 million. Given that the US represents approximately 25% of the world total, Price reached his figure of 10.8 million graduate students worldwide. By the same token, the U.S. Bureau of Labor Statistics indicated 1.7 million teaching personnel at US universities. Excluding assistants, this brings the number down to 1.54 million, and again multiplying this by four to give the global figure, there are then 6.16 million faculty members according to Price. Price also estimated the size of the science and engineering workforce. Based again on NSF data, there are 12.9 million people in the U.S. who say that their job requires a science or engineering degree. Again, grossing this figure up
to get a global estimate, there are according to Price 51.6 million scientists and engineers worldwide. Of this figure, 6.8 million are teaching personnel at universities, and 44.8 million are in the private sector. This assumes that the ‘core’ of academics is around 12% of the total, and the ‘tail’ is 88%. The data in Fig. 18.1 gives the average annual growth rate in the number of researchers.
Fig. 18.1: Average annual growth rates in number of researchers, by country/economy: 1996–2007. [Bar chart, vertical axis in percent (0–25); countries/economies shown include China, Singapore, Malaysia, Thailand, Taiwan, India, South Korea, Japan, the United States and the EU-27.]
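Price’s grossing-up procedure described above can be reproduced directly. The 22% S&E share of US graduate students and the factor of four (the US taken as roughly 25% of world totals) are his assumptions, not verified figures.

```python
# Reproduce Richard Price's 2011 gross-up from US figures to world
# totals. The shares and multipliers are his assumptions.

US_TO_WORLD = 4                # US taken as ~25% of world totals

# Graduate students: 600,000 US S&E grad students are assumed to be
# 22% of all US grad students; gross the total up to a world figure.
us_grad_students = 600_000 / 0.22                # ~2.7 million
world_grad_students = us_grad_students * US_TO_WORLD   # ~10.9M (Price rounds to 10.8M)

# Faculty: 1.7M US teaching personnel, 1.54M excluding assistants.
world_faculty = 1.54e6 * US_TO_WORLD             # 6.16 million

# S&E workforce: 12.9M US jobs said to require an S&E degree.
world_se_workforce = 12.9e6 * US_TO_WORLD        # 51.6 million

# Implied 'core' (university faculty) share of the S&E workforce.
core_share = world_faculty / world_se_workforce  # ~0.12, i.e. 12%
print(f"core share: {core_share:.0%}")           # core share: 12%
```

The final ratio is the source of the ‘core’ 12% versus ‘tail’ 88% split quoted in the text.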
UK Academics In the U.K., there are 162 higher education institutes. The number of students placed by UCAS (Universities and Colleges Admissions Service) in higher education exceeds half a million (see UCAS’ End of Cycle Report, December 2014). 512,400 people secured places in U.K. universities and colleges, up nearly 17,000 on 2013 (+3.4%). More UK students than ever were accepted into U.K. higher education (447,500, +3.2%) alongside record numbers of students from outside the U.K. The total number of applicants (699,700) almost equalled the levels seen in 2011 (700,200), the year before the introduction of higher tuition fees in England. Applicant numbers have increased from all U.K. countries (Tabs. 18.2 and 18.3, and Fig. 18.2).
244 Chapter 18
Academic Knowledge Workers
Tab. 18.2: U.K. applications for U.K. university attendance (numbers of applicants and of accepted applicants, by year). [Table values not reproduced.] Source: UCAS, Cheltenham, December 2014
Tab. 18.3: Students in U.K. universities by level of study, 2005/06 to 2012/13 (undergraduate, postgraduate and total numbers for each academic year). [Table values not reproduced.] Source: HESA, Cheltenham, December 2014
Fig. 18.2: Applications for full-time undergraduate university and polytechnic courses, 1965–2014. [Line chart, vertical axis: applications, 0 to 800,000.] Source: UCAS
In terms of subjects studied, the breakdown of the main areas is shown in Tab. 18.4.
Tab. 18.4: Breakdown of degrees by subject area in U.K. universities (Medicine and allied subjects; Biological sciences; Veterinary sciences; Physical sciences; Mathematical sciences; Computer sciences; Engineering and technology; Law; Education; Social studies; Business and administrative studies; Total), over three academic years. [Table values not reproduced.] Source: HESA, Cheltenham, December 2014
40% of all students are studying in the hard sciences, with 60% in the softer sciences. In addition there were some 181,000 academic staff in 2010/11 (Tab. 18.5). In 2006/07, 23% of this total were doing research.
Tab. 18.5: Breakdown by level of academic attainment in U.K. universities (Professors; Senior lecturers and researchers; Lecturers; Researchers; Other grades; Total). [Table values not reproduced.] Source: HESA, Cheltenham, December 2011
Each year graduates and postgraduates from universities and colleges make a choice of where to seek a career. The majority do not stay within the university system – they move into professions such as law, engineering or medicine; others move into small and medium enterprises (SMEs); others remain on the periphery of research in small research institutes, often in third world countries. Many take time out or swell the ranks of the unemployed. Each year over 650,000 attain a degree or diploma and are potentially available to work for SMEs or independent research centres, or to become citizen scientists.
The destination of graduates and postgraduates will be analysed further in a later chapter.
Barriers Facing U.K. Researchers A study entitled Access to scientific content: Gaps and Barriers was undertaken in the U.K. by CIBER on behalf of the Research Information Network among others, and published in 2012 (Rowlands, 2011). Nearly half the 2,600 respondents to the survey registered that they had faced difficulty in accessing the full text of journal articles on ten or more occasions in the previous twelve months. Despite this, only 5.4% of the respondents in universities felt that access to journal articles was ‘poor’ or ‘very poor’, though this rose to 19.8% for those knowledge workers in SMEs and 22.9% for those researchers in manufacturing. This differential is significant and reflects the extent of the problems UKWs have in gaining access to scientific literature. The chart (Fig. 18.3) shows, for example, that conference papers are important but not easy to access, whereas clinical guidelines are not so important but are relatively easy to get hold of. Faced with barriers to access, the frequent response was “simply to give up and find something else”, which does not augur well for efficiency and productivity. The study also pointed out that “there are around 1.8 million professional knowledge workers in the UK many working in R&D intensive occupations (such as software development, civil engineering, consultancy) and in small firms who are currently outside of the subscription arrangements. The needs of this sector of the economy demand greater policy attention.” (Rowlands, 2011)
U.K. Research Policy Despite the U.K.’s achievements in punching above its weight in terms of global research efficiency, effectiveness and excellence (see earlier), the level of private-sector R&D investment in the U.K. is low and has fallen relative to competitor nations. The innovative capacity and potential of the U.K. is therefore not matched by its engagement with economic competitiveness.
Fig. 18.3: Gaps and Barriers to STM access (2012). [Chart plotting, for each type of content (research papers, review articles, conference papers, patents, books/monographs, PhD theses, technical reports, reference works, clinical guidelines, standards, trade publications, technical information, legal information, market research, training materials, research data and archival records), its mean importance rating (1 = not at all important, 7 = extremely important) against the percentage of respondents finding it easy or fairly easy to access (0–100%).] Source: Rowlands et al., Access to scientific content: Gaps and Barriers. CIBER, 2012
In 2010, The Royal Society published a report entitled ‘The scientific century: securing our future prosperity’. The advisory group was chaired by Sir Martin Taylor FRS and included two Nobel Laureates, two former Ministers of Science, and leading figures from high-tech companies (Taylor, 2010). It gave two urgent messages. The first is the need to place science and innovation at the heart of the U.K.’s long-term strategy for economic growth. The second is the fierce competitive challenge the U.K. faces from countries which are investing at a scale and speed that the U.K. struggles to match. The net effect of the recommendations in the report would be to make science the springboard for more innovation, not only within the core research institutions but also throughout society as a whole. It would also result in the U.K. remaining competitive in the creation of scientific output compared with other leading nations. It would break the hold which the elitist scientific publication structures have over the scientific enterprise, and open up channels for a wider involvement by non-affiliated researchers.
Geographical Shifts – U.S.A. The 2014 biannual report from the U.S. National Science Foundation provides a summary of the science and engineering workforce in the United States (National Science Board, 2014). The key points from this report were that in 2010, estimates of the size of the U.S. science and engineering (S&E) workforce ranged from approximately 5 million to more than 19 million depending on the definition used. In 2010, there were about 5.4 million college graduates employed in science and engineering occupations in the United States. Occupations in the computer and mathematical sciences (2.4 million) and engineering (1.6 million) were the largest categories having occupations with a science/engineering emphasis occupations. Occupations in the life sciences (597,000), social sciences (518,000), and physical sciences (320,000) each employed a smaller number of S&E workers. In 2010, about 19.5 million college graduates in the United States had a bachelor’s or higher level degree in a science or engineering (S&E) field of study. Almost three-quarters (74%) of these college graduates (14.5 million) held their highest level of degree (bachelor’s, master’s, professional, or doctorate) in an S&E field. Overall, the most common fields of S&E highest degrees were social sciences (40%) and engineering (23%). Computer and mathematical sciences, life sciences, and physical sciences together accounted for slightly more than one-third (38%) of individuals with S&E highest degrees.
The application of scientific and engineering knowledge and skills is widespread across the U.S. economy and not confined to S&E occupations. The number of college-educated individuals reporting that their jobs require at least a bachelor’s degree level of technical expertise in one or more S&E fields (16.5 million) is significantly higher than the number in occupations with formal S&E titles (5.4 million). The S&E workforce has grown steadily over time. Between 1960 and 2011, the number of workers in S&E occupations grew at an average annual rate of 3.3%, greater than the 1.5% growth rate for the total workforce. Data from recent years indicated that trends in U.S. S&E employment compared favourably to overall employment trends during and after the 2007–2009 economic downturn. Between 2006 and 2012, the number of workers employed in S&E occupations rose slightly, whereas the total workforce shrank. The difference reported in the numbers of S&E workers in the U.S., ranging from 5.4 million to 16.5 million, is a reflection of the gap between affiliated knowledge workers and Unaffiliated Knowledge Workers. It also reflects the growing importance of STM qualified workers within the US economy in recent years.
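As an illustration of what those average annual rates imply when compounded over 1960–2011 (simple arithmetic using only the two rates quoted above):

```python
# Compound the quoted average annual growth rates over 1960-2011 to
# see the implied overall expansion of each workforce (illustration only).
years = 2011 - 1960                     # 51 years
se_multiple = 1.033 ** years            # S&E occupations at 3.3%/yr
total_multiple = 1.015 ** years         # total workforce at 1.5%/yr
print(f"S&E occupations grew ~{se_multiple:.1f}x")      # ~5.2x
print(f"total workforce grew ~{total_multiple:.1f}x")   # ~2.1x
```

A fivefold expansion of S&E occupations against a doubling of the total workforce underlines how much faster the technical segment has grown.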
Geographical Shifts – China The increase in competition faced by U.K. research has been pointed out above. A study from Thomson Reuters shows explosive growth in research output from China, far outpacing research activity in the rest of the world. At this pace, China will overtake the USA within the next decade as a publishing centre. The study, “Global Research Report: China”, informs policymakers about the research and collaboration potential of China and its current place in world science (Adams, 2009). China no longer depends on links to traditional G7 partners to help with its knowledge development. The study drew on data found in ISI/Thomson’s Web of Science. Key findings include: – China’s output increased from just over 20,000 research papers in 1998 to nearly 112,000 in 2008. The nation has doubled its output since 2004 alone. China surpassed Japan, the UK and Germany in 2006 and now stands second only to the USA – China’s research is concentrated in the physical sciences and technology. Materials science, chemistry and physics predominate. In future, rapid growth can be expected in agricultural sciences and life sciences fields such as immunology, microbiology, molecular biology and genetics. (Adams, 2009)
– The U.S.A. stands out as a collaborator with China in research; U.S.-based authors contributed to nearly 9% of papers from China-based institutions between 2004 and 2008.
China still has a rapidly developing economy, with annual growth rates in the 8.5% to 9.5% range, although these have dipped in recent years. Europe’s economic growth is, by comparison, languishing in the range from static to plus 1.5% per annum, and the United States is barely better at 2.5%. This is a factor which is beginning to distort the traditional balance of STM publishing.
Summary Researchers in universities, whether in the U.K. or worldwide, face several challenges. These are different from those faced by institutions and individuals outside academia. But the fact that academia also confronts some access barriers indicates that solutions are being sought to eliminate the dysfunctionality of the present system, and in so doing offers the scope for embracing a wider world of research-related activities in future. The concerns about dysfunctionality which have been addressed in chapters seven and eight are largely historical, about problems facing academic researchers during the past five years or so. Add in the drivers for change described in chapter six, and the dynamic created by these drivers accentuates the level of concern about the status of scientific publishing as we know it. It becomes a matter of public concern, which in some instances is leading to government intervention to create a viable and level playing field for researchers in academia. Instances of government intervention which are beginning to occur are described in chapter 26. The popular solution is for public research funds to come with a mandate to have the research results made ‘openly’ available (see chapter 24). In the meantime, the issue which is central to this book is: what are the constraints facing the wider market for scientific information beyond academia? Does this wider market suffer similar access problems, and to what extent? Who are these UKWs? This is the topic for the next chapters.
Chapter 19 Unaffiliated Knowledge Workers Introduction Thus far unaffiliated knowledge workers have been treated as a homogeneous, identifiable group which is cut off from the main developments in research communication. However, this is too simplistic. There are many types of UKWs, each with their own information profiles and cultures. So, who are those disenfranchised or unaffiliated knowledge workers who could become users of appropriate new information systems, and become a force in determining the outline of a more democratic approach to STM information access? There are several key sectors involved. These will be identified in the next few chapters. The following main groups are identified as being possible UKWs: – Many leading learned societies and professions, outside academia, still rely on high-level research results to sustain and improve their professional standards. Some expect it as part of their membership benefits – the individual needs to be regularly updated, assessed and re-certified, taking into account the latest developments affecting their profession. More generally there is an underlying mission to improve professional practice through adoption of the latest developments, which are often reported in magazines, research journals, newsletters and other published outputs. Latest developments should be available for easy access to all those practising in relevant professions. – There is a growing emphasis, in any society which is looking for economic growth and/or to reduce national debt, on small and medium enterprises (SMEs) bringing innovative new products and services to market. These often have a scientific, technical or medical content. In fact, some are spawned within university laboratories and subsequently floated on to industry, either in partnership with the university or as private ventures.
But employees of SMEs, as with the professions, are not included within the closed circle of supply and demand for scientific literature once the umbilical cord with their university has been broken. The publishers’ target is the larger, wealthier corporations which have invested in an in-house R&D function supported by an information and documentation centre – one with funds to buy subscriptions to journals – and not these small SME start-ups. – There are also many ‘citizen scientists’ or ‘amateur scientists’ who have chosen to pursue careers in other areas outside academia and corporate
R&D, but who still retain an interest in the subject matter of their early academic training. Their mass-collaboration on scientific projects is seen with the SDSS programme in astronomy; the Ocean Observatories Initiative on the ocean floor; the Allen Brain Atlas in brain sciences; the Human Genome project and Haplotype mapping in biomedicine, amongst many others – all leading to massive data webs being created as a result of participation from thousands of keen amateurs.
– Another significant professional area is agriculture. It is becoming increasingly reliant on science and technology to lead it towards greater efficiency and higher crop yields. There is a life-style issue which needs to be addressed in order to meet practitioners’ STM information needs.
– There are lobbyists and charities in the private sector which seek to bolster their respective missions with hard evidence drawn from scientific output, and are pushing for change to limit the onset of global climate change, to eradicate pollution, to improve social conditions, to save on energy, etc. In addition, science writers and journalists also feed on accessible scientific literature.
– Then there are the administrators, advisors and consultants who, although at the fringes of the academic publishing system, nevertheless have a profound impact on the direction the industry can take, and are themselves to some extent unaffiliated and disenfranchised.
– There are also policy makers in government and among funding centres charged with implementing relevant research programmes.
– Even for those operating within the U.K. higher education system itself, in universities and research institutes, access to the wide spectrum of scientific information needed is not always that easy because of the various barriers which operate within academia (see previous chapter). This group would also include alumni and friends of the university. It also includes a few impatient academics unwilling to wait for their institutional information services to obtain required texts through traditional document delivery channels.
– Other disenfranchised communities are those researchers and knowledge workers operating in developing countries. According to a recent study by Mendeley, based on the 2 million users of their service (Mendeley, 2012), there is a strong correlation between a nation’s R&D spend per capita and its readership of research papers through Mendeley.
– There are also many others. A requirement for access to research information can be found in areas such as:
– engineers and scientists working in remote installations where no library facilities exist;
– individuals working in financial institutions which are prepared to invest in new scientific-based businesses and have an occasional need to find out more about proposed projects;
– a broad platform of educated users, including people retraining or developing new skill sets;
– distance learners facing geographical challenges in accessing research libraries as walk-in users;
– patients who are seeking everything there is to know about the illness from which they are suffering – to know as much if not more than the overstretched general practitioner;
– the general public interested in global warming and climate change, environmental protection, etc.
Although the Unaffiliated Knowledge Worker sector under the above definitions is large and diffuse, there are three main target groups which are the focus of the present book:
– The Professions
– Small and Medium Enterprises (SMEs)
– Armchair scientists or Citizen Scientists.
It has been guesstimated by U.S. consultancy groups, albeit without much supporting evidence, that there are about 600–800 million knowledge workers globally, only 30 million of whom are in academic or corporate research areas. A recent estimate gives 50 million knowledge workers in the U.S.A. alone (NSF, 2014). This leaves a large section of global society as the latent market for scientific material. The Gartner Group, a U.S.-based consultancy, has estimated that knowledge work now represents the majority of jobs across multiple industries in developed communities. "Virtually non-existent only 100 years ago, knowledge workers now make up the largest slice, 40%, of the American workforce" claimed business management guru Peter Drucker. He further suggested that "Knowledge worker productivity is the biggest of the 21st century management challenges... (it is the) only real competitive advantage in a global economy" (Drucker, 1959). And it continues to grow. According to Morgan Stanley economist Stephen Roach, "This is, by far, the most rapidly growing segment of white collar employment. Over the past seven years... knowledge worker employment growth has averaged 3.5% per annum, sufficient to have accounted for fully 73% of total white collar employment growth over this period."
254 Chapter 19
Unaffiliated Knowledge Workers
Indications of the scale of potential users of scientific information found in various sectors are shown in Tab. 19.1.

Tab. 19.1: Gross estimates of the number of knowledge workers, U.K. and U.S.A. [figures not recoverable in this copy]. Sectors: Engineering; Accountancy; Entrepreneurs; Managers; Distance learners; Patients; Hobbyists; Citizen scientists; Undergraduates; Postgraduates. Columns: U.K. knowledge workers (estimated); U.S.A. knowledge workers (estimated). Source: Outsell, 2007
The range of UKWs is therefore wide. Fig. 19.1 gives a rough indication of the areas which may be involved and the interplay between the types of knowledge workers in the U.K.
Fig. 19.1: Overview of main areas of knowledge workers. [The figure shows interlinked groups: Alumni; City and Financial; Professionals; Small and Medium-sized Enterprises; Voluntary Workers and Charities; Agriculture; Entrepreneurs and Innovators; Researchers in Academia; Researchers in Industry; Patients and Healthcare Workers; Policy makers and Research Funders; Citizen scientists/interested laypersons; Distance Learners; Developing economies; Government officials; Retraining; General Public.]
The Unaffiliated

As communities, UKWs are growing in numbers and influence. They have inherited a situation whereby they had become disenfranchised or unaffiliated within the mainstream of research outputs. An example of the annoyance which the current toll-based journal access system creates among one particular group can be seen in the Thunderclap created on 18th November 2013 and circulated among Twitter groups: "Every time you hit a paywall is an isolated moment of frustration that is unlikely to shake the ivory tower of academic publishing. By putting these moments together using the Open Access button, we will capture your individual moments of injustice and frustration and display them, on full view to the world. Only by making this problem impossible to ignore can we change the system." The project was started by two students frustrated by the current system and driven to change the publishing system, and was made possible by support from developers, advocates and the open access community at large. They developed a prototype which included a button able to track and map every time a user hit a paywall, and thereby help users get access to the paper for free. Advocates can use the stories and data the button collects to push for change. The developers of the system claimed the paywall problem affected many groups in society – "patients, students, doctors and academics".
This is one reflection of the problem facing scientific publishing as it currently affects unaffiliated knowledge workers in the U.K. and throughout the world, and how it is being addressed so far in many isolated and disparate ways.
U.K. Knowledge Workers

There are an estimated 11.1 million so-called knowledge workers in the U.K. (Office of National Statistics, 2011). This contrasts with the 2.5 million people in U.K. higher education (HESA, 2013). It also differs from the data made available by the Department for Business, Innovation and Skills (UKDBIS, 2009), which concluded that there were 1.8 million knowledge workers in the U.K. in 2009, including 130,000 IT staff involved in R&D and 78,000 civil engineers, among others (see later). "Of these a considerable but uncertain proportion are unaffiliated, without corporate library or information centre support" (Rowlands, 2011). Tab. 19.2 looks at the breakdown of the 'official' knowledge workers in the U.K. as determined by the Office of National Statistics (ONS).
Tab. 19.2: Broad sector knowledge workers by SOC occupation group and SIC industry code [figures not recoverable in this copy]. Occupation groups: Managers, senior staff and officials; Professional occupations; Associate professional and technical occupations. Industry sectors: Mining; Manufacturing; Electricity, gas; Construction; Wholesale/retail; Hotels/restaurants; Transport, storage; Financial; Real estate; Public administration & defence; Education; Health/social work; Other community, social. Further columns: All sector workers; Total knowledge workers; % knowledge workers. Source: CIBER reworking of Office of National Statistics (Labour Force Survey) data (CIBER, 2012)
Not all the above knowledge workers are at the coalface of research, nor do all require access to the latest developments available in scientific publications. But some are – and their performance could be enhanced if they were able to gain the same level of access to relevant research results as their former colleagues still working behind academic garden walls. The breakdown of the numbers of research and development professionals by category in the U.K. can be seen in data provided by the Department for Business, Innovation and Skills (UKDBIS, 2009) (Tab. 19.3).

Tab. 19.3: Numbers of R&D professionals in U.K. business sectors [figures not recoverable in this copy]. Industrial & engineering professions: IT strategy & planning; Civil engineers; Mechanical engineers; Chemical engineers; Design & development engineers; Electronics engineers; Production & process engineers; Planning & quality engineers; Quantity surveyors; Bioscientists & biochemists; Pharmaceutical/pharmacology; Physicists, geologists. Service sector: Medical profession; Dentists; Opticians; Software professionals; Solicitors, lawyers, judges; Legal profession (others); Management, business; Accountants; Management accountants; Psychologists; Social science researchers; Social workers; Probation officers; Public service; Architects; Town planners; Veterinarians. Columns: Numbers employed (BIS); Percentage of total professions. Sources: "The sectoral distribution of R&D", 2009 R&D Scoreboard, U.K. Department for Business, Innovation and Skills
The numbers are much lower than in the previous (ONS-derived) table, reduced by the greater focus on research-related activities rather than more general knowledge work. However, the table indicates just how much of a service economy the U.K. has become, with over 70% of R&D professionals being in the service sector, the software- and medicine-related areas being responsible for a further quarter of all such professionals, and engineers amounting to 14%. There is a wide distribution of professionals in other areas – some 30 are identified above, and the list is by no means exhaustive. Each has a separate and distinct research and information profile, and each requires separate analysis of its information behaviour characteristics and needs. At present there is no means of establishing what proportion of these narrowly defined knowledge workers are 'affiliated' to a central purchasing scheme for published scientific content. This area requires additional study by means of targeted, in-depth 'niche' sector assessments. In addition, this is only a small part of the total potential market. It excludes, for example, the vast army of 'citizen scientists' – those who have an interest rather than a career in following scientific developments. They are included neither in ONS's 11.1 million U.K. knowledge workers nor in the BIS R&D professional statistics. As mentioned earlier, there is little evidence available on the amount of information required by any of the 'unaffiliated' members of the 'long tail'. It is speculated that an efficient and effective scientific dissemination system would extend the 'tail' of demand to an extent that would make it a viable target for new business approaches by scientific publishers, but there are few facts available to justify such an assumption.
It is not surprising therefore that publishers have preferred to focus their commercial attention on the known institutional (research library) market.
The ‘long tail’
259
The ‘long tail’ LOOSE
For each knowledge worker sector – professional, SME and amateur scientist – there will be a spectrum of demand, in most cases without the intensity which can be found in academia. But what the UKWs lack in intensity of use, they more than make up for in sheer numbers. This is where the 'Long Tail' becomes important (see chapter 14). As far as the knowledge worker market is concerned, it suggests that if the 'long tail' of knowledge workers in all the above sectors were aggregated, the potential new demand for scientific information could equal if not exceed the current restricted demand within academic subscribing institutions. Nor is this a static position. The growth in the 'tail' (UKWs) is greater than the growth in the 'core', based notably on the annual migration from higher education into the UKW sectors: new graduates and postgraduates join the private sector in greater numbers than remain within academia. The scale of the growth is indicated by the HESA statistics on Destinations of Leavers (Tab. 19.4).

Tab. 19.4: U.K. graduate employment, 2002/3 to 2008/9 [figures not recoverable in this copy]. Rows: Year 2002/3 and Year 2008/9, each broken down into All degrees, First degrees and Postgraduates. Columns: Numbers (actual); Paid work (in %); Further study (in %). Source: HESA, 2013
Going into paid work seems to have slipped between the two years. This may have as much to do with the economic conditions at the time as with the preferences of graduates. It also relates to the type of degree awarded – in medicine and dentistry, for example, 92.2% go into (non-academic) work environments, whereas the subject generating the greatest proportion going into further study (32.3%) is law. Nevertheless, the number going into paid work outside academia is still three times that of graduates remaining within academia. Each year the tail gets larger. Tab. 19.5 expands on this to show that the actual numbers for the following year, 2009/10, were even more favourable to the UKW
sector – almost 250,000 going into paid work, with over 50,000 remaining in further study. There are some grey areas, such as those graduates who go on to combine paid work with further study – 36,000.

Tab. 19.5: Employment activity of U.K. graduates over three academic years [years and figures not recoverable in this copy]. Activities: Full-time paid work only; Part-time work only; Voluntary/unpaid work only; Work and further study; Further study only; Assumed to be unemployed; Not available for employment; Other; Total. Columns give the percentage for each year and the totals for the final year. Source: HESA, 2013
Tab. 19.6 gives the percentages and actual numbers of graduates/postgraduates and their various destinations after leaving academia in 2012/13. This again shows that 70% – or at least 300,000 each year – of leavers are destined for non-academic work sectors. Only 55,000 (or 13%) continued with their studies.

Tab. 19.6: Leavers by activity, 2012/13 (all survey respondents) [detailed activity figures not recoverable in this copy]. Activities: Full-time work; Part-time work; Primarily in work and also studying; Primarily studying and also in work; Full-time study; Part-time study; Due to start work; Unemployed; Other. Summary rows: Total employed 327,185 (76.5%); Total study 54,935 (12.9%); Total unemployed 27,470 (6.4%); Others 18,280 (4.2%); Total survey respondents 427,870 (100.0%). Source: HESA, 2014
The ‘long tail’
261
In effect, the split between those employed and those remaining within academia is 76% (employed) and 13% (academic), excluding a small overlap. In addition, almost 10% are unemployed or otherwise outside the workforce. But over two thirds of all those graduating move into areas which could in principle prevent them from having easy access to published research results (HESA, 2010). Tabs. 19.7 and 19.8 give a further breakdown of the split between full-time employment and further study by occupational discipline. The first table covers graduates; the second covers postgraduates.

Tab. 19.7: U.K. output of graduates into knowledge-based occupations, 2008/09 [figures not recoverable in this copy]. Industrial & Engineering professions: IT strategy and planning; Civil engineers; Mechanical engineers; Chemical engineers; Design & development engineers; Electronic engineers; Production & process; Planning & quality; Quantity surveyors; Bioscientists/biochemists; Physicists, geologists. Services: Medical professions; Dentists; Opticians; Software professions; Solicitors, lawyers; Legal professions nec; Management, business; Management accountants; Psychologists; Social science research; Social workers; Probation officers; Public service; Architects; Town planners; Veterinarians. Columns: Numbers overall; Paid work numbers; Paid work %; Further study numbers; Further study %.

Tab. 19.8: Number of U.K. postgraduates by destination, 2008/09 [figures not recoverable in this copy]. Professions and columns as in Tab. 19.7.

* Percentages amended to take account of the 'Supplementary Subject' information in the HESA statistics (Tab. 3a). Sources: "Destinations of Leavers from Higher Education Institutions 2008/09", Higher Education Statistics Agency, June 2010; "The sectoral distribution of R&D", 2009 R&D Scoreboard, U.K. Department for Business, Innovation and Skills, http://www.innovation.gov.U.K./rd_scoreboard/?p=37
Matching statistics from education sources (HESA) against those produced by another public agency (ONS) is not easy – it is approximate at best. But the key point is that of the 291,500 U.K. graduates in 2009, only 14% stayed on to become 'privileged' or 'affiliated' scientific information users, whereas the majority took employment in various professions and businesses in the U.K., both public and private. The type of information service which would be appropriate depends on the discipline on which the graduates and postgraduates focus. The balance of digital publication requirements in the arts and humanities differs from that in the physical sciences, which in turn differs from the life sciences. Tab. 19.9 gives a breakdown of the disciplines of the degrees which graduates entering employment had taken in 2011/12. Nearly 50% of the leavers were in the sciences (half of whom were in the life sciences), with business studies, social science and creative arts also figuring as important contributors to the U.K. knowledge economy. The subject breakdown is given in more detail in Tabs. 19.10 and 19.11, which distinguish those going into the workplace from those going on to further study, and whether this is full time or part time.
Tab. 19.9: Full-time degree leavers by subject area and activity, 2012/13. [Chart residue: the underlying chart showed the percentage of U.K.-domiciled full-time first degree leavers by subject area and activity; individual values are scattered in this copy and not reliably recoverable. Subject areas: Medicine & dentistry; Subjects allied to medicine; Biological sciences; Veterinary science; Agriculture & related subjects; Physical sciences; Mathematical sciences; Computer science; Engineering & technology; Architecture, building & planning; Social studies; Law; Business & administrative studies; Mass communications & documentation; Languages; Historical & philosophical studies; Creative arts & design; Education; Combined. Activities charted: Full-time work; Part-time work; Work and further study; Further study; Unemployed; Other. Percentage labels below 2.0% were not shown; all other percentages were rounded to one decimal place. © Higher Education Statistics Agency Limited 2014]
Tab. 19.10: Destination of U.K. university leavers who obtained first degrees, by subject area and activity [most figures not recoverable in this copy]. Disciplines: Medicine & dentistry; Subjects allied to medicine; Biological sciences; Veterinary sciences; Agriculture & related subjects; Physical sciences; Mathematical sciences; Computer sciences; Engineering & technology; Architecture, building; Social studies; Law; Business & administration; Mass communications; Languages; History/philosophy; Creative arts & design; Education; Combined studies. Columns: Full-time work; Part-time work; Full-time study; Part-time study. Total all subjects: 120,635 (full-time work); 32,885 (part-time work); 31,970 (full-time study); 2,315 (part-time study). Source: Destinations of U.K. Domiciled Leavers who Obtained Qualifications through Full-time Study, HESA, 2010 (Table 3a)
Research and Development Employment in U.K. Industry

Meanwhile, in the U.K., there is little consistency in the statistics given for R&D undertaken specifically within the non-academic sector. Tab. 19.11 gives the overall dimensions according to two sets of national R&D data.

Tab. 19.11: U.K. R&D in professional and engineering sectors [figures not recoverable in this copy]. Columns: Numbers employed; Number of firms; R&D data (ONS data, £m (A); BIS data, £m (B)). Engineering and industry professions: IT strategy and planning; Technology hardware; Mobile communications; Telecommunications; Civil engineers; Oil and gas production; Mining; Industrial engineering; Aerospace and defence; Mechanical engineers; General industrial; Industrial transport; Gas, water utilities; Oil equipment services; Design & development engineering; Personal goods; Leisure goods; Production & process engineering; Forestry and paper; Planning & quality engineering; Electrical engineers; Electronics engineers; Quantity surveyors; Household/home; Bioscientists and biochemists; Food producers; Beverages; Tobacco; Pharmaceutical/pharmacology; Chemists; Chemical engineers; Physicists, geologists, meteorologists; Industrial metals. Service professions: Medical practitioners; Software professionals; Solicitors, lawyers, judges; Management consultants; Life insurance; Accountants (certified & chartered); Bankers; Social workers; Accountants (management); Architects; Dentists; Public service workers; Psychologists; Town planners; Legal profession nec; Opticians; Veterinarians; Probation officers; Social science researchers; R&D support; Wholesale & retail; Miscellaneous. Sources: (A) Table SB2 – Expenditure on R&D performed in U.K. Businesses: 2001 to 2008, U.K. Business Enterprise Research and Development Statistical Bulletin, 2008 (11 December 2009); (B) Department for Business, Innovation and Skills – Scoreboard. See http://www.innovation.gov.U.K./rd-/?p=11
The above data elements have been brought together in Tabs. 19.12a and 19.12b. These summary tables show that there are gaps in the available data, and that (depending on the definitions employed) the data differ for the same variable. This highlights the poor quality of demographic data resources, and makes the number of unaffiliated knowledge workers outside academia difficult to estimate.
Tab. 19.12a: Integration of data sources [figures not recoverable in this copy]. Professional areas: Medical; Bioscience; Engineering; Veterinary; Dentists; Architects; Physical science; Mathematicians; Computing/IT; Law; Education; Business. Columns: Society membership; R&D in professional areas; DBIS dashboard data; First employment after degree; Degrees.

Tab. 19.12b: Integration of data [figures not recoverable in this copy]. Professional areas as above, plus Distance Learners. Columns: HESA employment data; HESA graduates; HESA postgraduates; BSO knowledge workers; Outsell data.
Using best estimates from several of the science/professional areas, it would appear that the annual intake of first degree graduates into the areas listed is about 10% of the total R&D employees in those areas (HESA data), and that the annual intake of postgraduates amounts to a further 1% of R&D employees. This would suggest that the number of 'unaffiliated' researchers outside academia is growing at rates which exceed both overall population growth and academic enrolment figures.
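The compounding implied by these intake rates can be illustrated with a small projection. Only the 10% graduate intake figure comes from the text above; the attrition rate and the starting pool size used here are purely hypothetical assumptions introduced for illustration.

```python
# Illustrative sketch only: projects the size of a professional area's R&D
# workforce given an annual graduate intake of ~10% of the stock (from the
# text) and an ASSUMED attrition rate chosen for illustration.
def project_pool(stock: float, intake_rate: float = 0.10,
                 attrition_rate: float = 0.07, years: int = 10) -> float:
    """Project workforce stock forward with fractional intake and attrition."""
    for _ in range(years):
        stock += stock * intake_rate - stock * attrition_rate
    return stock

# A hypothetical 100,000-strong professional area: with a 3% net annual gain,
# the pool reaches roughly 134,000 after ten years, comfortably outpacing
# general population growth of under 1% per annum.
pool = project_pool(100_000)
```

Even a modest net gain, sustained year on year, is enough to make the unaffiliated 'tail' grow faster than the academic 'core'.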
In descending size, Tab. 19.13 indicates those unaffiliated research areas which would warrant attention as a focus for future scientific information programmes (some figures in the original table are estimates based on the above data).

Tab. 19.13: Key areas for academic intake into professions [figures not recoverable in this copy]. Disciplines/professions: Computing/IT; Engineering; Medicine; Biosciences; Physical scientists; Architects; Dentistry; Veterinary. Columns: Numbers of research staff; Annual graduate intake into profession; Annual postgraduate intake into profession.
These are the areas which warrant consideration in developing outreach programmes for UKWs.
Tentative estimate of the latent demand from Knowledge Workers

As has been stressed so far, there have been few attempts to indicate what the 'new market' for scientific content serving professional knowledge workers could be. The estimate rests on many assumptions which are subject to further analysis, but the following thought piece reflects on the opportunities which may lie outside the current scientific publishing system.
Estimates of Demand for Articles
– In 2006/07 UK academics and students downloaded 102 million full-text articles (according to COUNTER statistics).
– Assuming the rest of the affiliated UK sector (government labs, research institutes, pharmaceutical companies, other large corporate R&D centres, etc.) adds another 40% to the above, this amounts to 143 million downloads.
– Applying a growth factor of 10% in download traffic between 2006/07 and the present gives almost 160 million downloads within the 'affiliated tent'.
– Taking the UK as typical of the global academic and research scene, the UK share of research output can be applied to the worldwide download traffic estimate. The UK output of STM material is approximately 6.6%, which would put global downloads of scientific articles at around 2 billion per annum.
– The ONS has provided data which gives an estimate of 11.1 million knowledge workers in the UK. This would be the broadest extent of the professional knowledge worker 'long tail'.
– 4.1 million affiliated users (academics including R&D staff) in the UK generate 41 downloads each per annum (= 168 million).
– In the UK, for demand in unaffiliated markets (7 million) to become as great as in affiliated markets, the average knowledge worker would need to consume 24 articles per annum. The spectrum would range from zero to 40 documents per annum (the latter for those involved in full-time R&D). (Source: Dr I. Rowlands, personal communication)
Looking at it another way, for each article per annum that knowledge workers in the UK download, the additional (download) traffic to the publisher would be about 7.7%. If one assumes that the last quartile might represent the average for the typical 'long tail' user (approximately 10 articles needed per annum), this would generate 110 million additional downloads. Given the right commercial incentives, this would add some 70% to publishers' UK traffic, or 1.4 billion additional document downloads worldwide. All this hinges on the business model of the article economy remaining in place. Open Access, with its content free to the end user, threatens the stability of this system by not providing a strong enough case for ongoing commercial viability.
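The back-of-envelope arithmetic above can be reproduced as a short script. Every input is one of the chapter's own quoted estimates or assumptions (the COUNTER 2006/07 figure, a 40% corporate uplift, a 10% growth factor, a 6.6% UK share of STM output), not independently measured data, and the variable names are illustrative only.

```python
# Reproduces the chapter's latent-demand estimate; all inputs are the
# assumptions quoted in the text, not measured values.
uk_affiliated_downloads_2006 = 102e6   # COUNTER: UK academic full-text downloads, 2006/07
corporate_uplift = 0.40                # assumed non-academic affiliated sector uplift
growth_factor = 0.10                   # assumed traffic growth since 2006/07
uk_share_of_stm_output = 0.066         # UK share of world STM output

affiliated_base = uk_affiliated_downloads_2006 * (1 + corporate_uplift)  # ~143 m
affiliated_now = affiliated_base * (1 + growth_factor)                   # ~157 m ("almost 160 m")
global_downloads = affiliated_base / uk_share_of_stm_output              # ~2.2 bn ("2 billion")

uk_knowledge_workers = 11.1e6          # ONS broad estimate of the UK 'long tail'
unaffiliated = 7e6                     # text's narrower unaffiliated pool
parity_articles_each = 168e6 / unaffiliated      # 24 articles/yr to match affiliated demand

# One extra article per knowledge worker per year, against the affiliated baseline:
per_article_uplift = uk_knowledge_workers / affiliated_base   # ~0.078 (the text's "about 7.7%")

# A 'long tail' average of 10 articles per knowledge worker per year:
long_tail_downloads = uk_knowledge_workers * 10               # ~111 m (the text's "110 million")
```

Working through the figures shows where the rounding in the text comes from: 143 million is 102 million plus 40%, and the 110 million additional downloads are simply 11.1 million knowledge workers at ten articles each.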
Global Numbers of Knowledge Workers

There are no accurate statistics on the number of people worldwide who have passed through the higher education system, have become knowledge workers (both affiliated and non-affiliated) and have an interest in the output of scientific research. All that can be said is that the 'long tail' of latent demand is large. Estimates given by several U.S. consultancy companies (including Outsell and Gartner) put the worldwide number of knowledge workers at 600 million to 800 million. Microsoft puts the number of knowledge worker prospects at 500 million (Microsoft, 2010). This suggests that as much as 12% of the world's population may have some 'knowledge worker' element in their profiles.
The number of knowledge workers operating specifically within university and large corporate R&D departments has been estimated at 30 million worldwide. The difference between those who have relatively easy access to published research (30 million) and those who would struggle to get access (the remainder of the 600–800 million) gives an indication of the extent of the 'long tail' in scientific information. For the United States alone a figure of 40 to 50 million knowledge workers has been proposed (DeepDyve, based on U.S. Census data). Other commentators feel that 40% of the U.S. working population are 'learned workers' in the broadest definition. In the U.K., the Office of National Statistics (ONS) gives an estimate of over 11.1 million. However, the definition of 'knowledge worker' varies by country and statistical compilation system.
Summary

The present situation can be categorised as follows:
– Identifying the published sources of new research results has become easier for a wider sector of society as the global search engines have become more powerful, and their results – which are accessible for free – have become more comprehensive in their coverage.
– There has been a collapse in the market for 'personal subscriptions' to learned journals because of high journal prices (Tenopir & King, 2000). Scientific communication has become an institutionally-focused activity.
– Public access is in many cases being denied to research results which were funded by the public purse. Published results are ring-fenced by publishers and only available to those institutions which are able and willing to invest in journal subscriptions, some involving charges of tens of thousands of pounds per title per annum.
– There is a 'long tail' of individual document/article demand, much of it latent.
– Nevertheless, greater 'openness' is allowing more information to be obtained without charge.
– Communication of research activity is increasingly adopting social media as the medium of exchange, which is eroding the traditional monopoly of the research article and journal.
The traditionally 'disenfranchised' and 'unaffiliated' are gradually becoming empowered and enfranchised, and demography in higher education is drawing affiliated and unaffiliated knowledge workers closer together.
What is important from the above chapter is that there is a large workforce in the UK which can be categorised as 'unaffiliated knowledge workers'. Numerically and demographically they constitute a sector of society which warrants more detailed investigation than it has received in the past. Greater precision and clarity can be achieved by looking at the constituencies of knowledge workers in the UK – particularly the professions, SMEs and citizen scientists. What are the unique problems facing each of these areas? This issue will be tackled in the next chapters.
Chapter 20 The Professions

Introduction

There are unique challenges which researchers face once they leave academia for the private sector. The challenges differ according to which UKW sector they are in. The following chapter looks specifically at the professions, particularly those which have a strong scientific, technical, engineering or medical orientation.
What is a Profession?

A professional is someone who receives important occupational rewards from a reference group whose membership is limited to people who have undergone a similar specialised formal education and have accepted a group-defined code of proper conduct (Wilson, 1989). There is a greater lock-in to a set of standards, procedures and approaches to ensure that professional status is enhanced. A professional is different from an academic researcher. Those in professions have a culture which differs from academics in the following respects:
– they differ in their response to peer pressure
– they have different funding drivers
– they differ in their success criteria
– they do not seek global recognition
– their main allegiance is to their professional association
– they operate outside the closed (elitist) system of scientific research information.
Nevertheless, professionals are educated to standards comparable with those of the 20–30% of graduates/postgraduates who remain in higher education/academia. They cannot be distinguished from academics solely on the basis of their specialist knowledge. What distinguishes them – and this is central to this book – is their lack of easy access to those publications which were the tools of their formative higher education or training. A professional often becomes a gatekeeper by providing a necessary social function but also controlling that function – maintaining its existence. Mass
professionalisation is something of an oxymoron – a professional class requires a specialised function, minimum tests for competency, and a minority of members. Mass is not part of the agenda for a viable profession. A profession is created as a result of scarcity of skills, expertise or knowledge, and the members are often the last to recognise when the scarcity that created their community disappears. It is easier for the closed group to accept that they face competition rather than redundancy or obsolescence. The new competitor to professions is the Web. Revolutions involve a long period of chaos in moving from A to B. New technology, for example, has to become normal, then ubiquitous, and finally so pervasive as to be invisible before really profound changes happen. Professional self-defence, valuable in ordinary times, becomes a disadvantage in a revolutionary period because professionals are mainly concerned with threats to their careers and corporate existence. It builds conservatism into the structure of a profession – in much the same way as conservatism has kept the scientific communication process unchanged in recent years. Professionals often operate within a learned society structure. Many of the leading learned societies rely on:
– high-level research results in order to sustain and improve their professional standards
– the latest developments related to their professional expertise being circulated so that they can remain relevant and up-to-date
– re-certification, re-assessment and education programmes so that quality can be monitored
– improvements to professional practices being adopted in line with social needs.
Publications, including the publication of a specialist journal, often become part of the package of support services provided by the professional institution to its members. However, the media landscape has been transformed because personal communication and publishing, which were previously separate functions, now shade into one another.
New ways of reaching out to members of a profession are now possible through the Internet. Besides the change in formats for communication with their members, many professional societies have felt the need to subcontract the publication of their specialist house journal to third parties, to avail themselves of the operational scale (as an organisation) and professionalism which outsiders can offer. In many cases this subcontracting is handed over to large commercial journal publishers (see chapter 23 on ‘Learned Societies’).
List of professions

The following is an indicative list of professions which operate within the U.K.:
– Accountants
– Actuaries
– Advocates
– Architects
– Archivists
– Audiologists
– Dentists
– Diplomats
– Doctors
– Economists
– Engineers
– Financial analysts
– Information and Communications Technologists
– Journalists
– Lawyers
– Military Officers
– Neuroscientists
– Occupational therapists
– Optometrists
– Nurses
– Pharmacists
– Philosophers
– Physicians
– Airline pilots
– Professors
– Psychologists
– Scientists
– Social workers
– Software engineers
– Speech Language Pathologists
– Statisticians
– Surgeons
– Teachers
– Translators and interpreters
– Veterinarians
The list is not comprehensive – there are many more professions and sub-professions, and many more emerging on the back of changes in society. This is particularly noticeable in the financial and business sectors at present. Each of the above professions has a different approach to information needs in its daily professional life. Each will have its own ‘information culture’, a combination of legacy traditions and the accommodation to new communication services. Though few professions have analysed their information cultures in much depth, there are a few isolated studies which give an insight into the variability in approaches to ‘keeping up to date.’
User Behaviour of Business Researchers

Users of business information differ from other professionals. There is an emphasis on immediacy and online as preferred delivery options. However, meeting business information requirements demands an understanding of the purposes for which the information is needed, the environment within which the user operates, the skills required for identifying the needed information, the speed with which information should be delivered, and preferred channels and sources of information. As Kanter (2003) highlighted, information has become a critical asset for companies. As such, good and reliable sources are a prerequisite. However, only one quarter of the respondents in his study physically visited the library at least 2–4 times per week. The proportion rose by a further 17% for monthly visits, and by 44% for visits each semester. Therefore, although heavy use is made of electronic information, access to a library with its physical store of print and electronic material is still only an occasional event in the business studies calendar. In March 2008, the consultancy group Cap Gemini UK published a report entitled ‘The Information Opportunity’ by Ramesh Harij (CapGemini, 2008) which estimated the annual cost of poor decision making across UK businesses and public sector organisations, as a result of inferior information access, at £67 billion. The report, based on in-depth interviews with senior leaders from FTSE 350 and U.K. public sector organisations, found that there was ‘a broken information culture.’ The values and behaviours associated with how organisations collect, use, manage and share information were not working efficiently. It was believed that lack of good information suppressed performance by an average of 29%. This equated to an annual £46 billion missed opportunity for private sector profits, and £21 billion in extra administrative costs across the public sector. The main reasons for the frustration among U.K.
business leaders were:
– decisions have to be made on inadequate information despite a doubling of information in recent years
– there is a failure to share this information
– multiple versions of the same data exist without the authoritative version being apparent.
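The CapGemini figures quoted above decompose consistently: the £67 billion headline is the sum of the private-sector and public-sector components. A minimal sketch in Python, using only the figures given in the text, serves as a consistency check:

```python
# CapGemini (2008) cost-of-poor-information figures, as quoted in the text,
# recombined as a simple consistency check on the headline number.

private_missed_profit_bn = 46   # missed private-sector profit, GBP billions
public_extra_admin_bn = 21      # extra public-sector admin cost, GBP billions
headline_total_bn = 67          # quoted total annual cost, GBP billions

# The two components should sum to the quoted headline figure.
assert private_missed_profit_bn + public_extra_admin_bn == headline_total_bn
print(f"Total annual cost: £{private_missed_profit_bn + public_extra_admin_bn}bn")
# prints "Total annual cost: £67bn"
```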
Mansfield (Mansfield, 1991) attempted to measure the returns to R&D for those innovations that are directly related to academic research. From a survey of R&D executives in U.S. firms, he found that around 10% of new products and processes would not have occurred within a year in the absence of relevant academic research. These contributed 3% of sales and 1% of costs. By 1998, Mansfield found these percentages had increased to 5% of sales and 2% of costs. This study was repeated in Germany (Beise & Stahl, 1999) with similar results – around one tenth of innovations relied on the results from publicly-funded research and accounted for 5% of new product sales. It would be wrong to claim that scientific information is at the forefront of the challenge facing professions as a whole, but it could be considered part of their information-use problem. Management of critical information for decision-making purposes is a problem affecting the entire private sector – from SMEs to large multinationals. They are different in nature, but together represent a challenge which the scientific publishing industry needs to confront as the drive for efficiency grows.
User Behaviour of Economists

Economics journals ceased being a primary way to communicate ideas at least 25 years ago, replaced by working papers – publication was more about certification for the purposes of academic tenure than anything else. Partly this switch away from journals was because of the long publication time lags – as one commentator has said, by the time his most successful academic paper was actually published (in 1991) there were around 150 derivative papers which had emerged around the topic. In other instances, rigid ideologies blocked new ideas. Ken Rogoff wrote about the impossibility of publishing realistic macro data in the face of “new neoclassical repression” (Krugman, 2013). The key discussions in macroeconomics, and to a lesser extent in other fields, are taking place in the econoblogosphere. This is true even for research done at official institutions such as the IMF and the Federal Reserve. People read their working papers online, and that is how their work gets incorporated into discourse.
User Behaviour of Engineers

Engineers are conducting more of their searches for equations online, but they are dissatisfied with the results. This was found during a 2013 survey of 200 engineers conducted on behalf of the information company Knovel (Knovel, 2013). A majority of engineers surveyed look for equations at least once a week, and increasingly they search via the Internet first, rather than reference works, handbooks and other sources. Use of Internet search tools has grown significantly, from 59% of engineers surveyed in 2010 to 78% of engineers surveyed in 2013. Yet engineers do not easily find the specific equations they need for a variety of engineering tasks online. There is also low satisfaction with the process of documenting, validating and saving equations for future use. The 2013 survey was primarily focused on mechanical engineers from companies with more than 1,000 employees. It appeared that online search had surpassed printed material (including books and manuals) as the first place that engineers go to find equations. 92% of engineers searching online relied on public search engines such as Google, up from 41% in 2010. Although Google is the first place engineers turn to, it was the least satisfying for results. Many things are easier to find online now, but on Google specific equations are not among them. Once engineers find the right equation, they face additional challenges, including using the right tool to perform calculations and accessing an integrated solution to validate and share calculations (47% of those surveyed write the equation out on paper). Engineers rely on several different software tools, including scientific calculators, MATLAB and Mathcad, for their calculation needs. However, Microsoft Excel tops the list for use and dominates how engineers share equations, even though Excel was not designed with engineering needs in mind.
While engineers do go online to search for the equations they need, they still go offline for calculation and validation of their work. 87% of those surveyed used their hard drive to save their calculations and 84% used Excel to share their calculations with peers. “The web offers convenient and seemingly easy search options for engineers looking for resources to support them as they perform their jobs, but they need to find relevant and reliable answers they can trust to increase productivity as well as their confidence in the results,” said Knovel’s Meagan Cooke, Senior Director of Product Management, Content, in a 2013 press release available on the Knovel web site. Knovel is now an Elsevier company. Elsevier sees the focus which Knovel brings in serving a professional (as distinct from academic) audience as a valuable strategic move for the company.
Summary

There are a growing number of professionals in society as universities churn out more and more graduates, an increasing proportion of whom seek work outside academia, including in the professions. Professionals share with academics a deep understanding of a particular branch of science. Their requirement to be active at the research coalface may not be as intense as that of their university-based peers, but remaining in touch with the main developments in their research area might prove beneficial as they pursue their careers. As society demands that the results of research are applied to its benefit, an efficient mechanism to enable professionals to make use of the latest research results becomes more important. Procedures and systems need to be put in place to make this happen. Currently there is a distance between the latest research and the professional bodies supporting their members. This can only change in future, and professions may lead the move towards more open science.
Chapter 21 Small and Medium Enterprises

Introduction

A further sector of society which is not having its scientific information needs fully addressed is small and medium enterprises (SMEs). These are small private companies which have limited resources compared with universities and large corporations but are nevertheless often pioneers and innovators, helping to change the direction which industry and the economy take in future years. They are less bureaucratic, less hierarchical and less focused on protecting traditional practices, corporate structures and activities. According to the latest available ONS figures there were 2.10 million enterprises registered for VAT and/or PAYE in March 2010, compared to 2.15 million in March 2009, a 2.4 per cent decrease. The professional, scientific and technical sector accounts for the largest number of businesses, with 15.4% of all enterprises registered. This is followed by construction with 13.1%, and retail with 9.0%. The distribution of enterprises by employment size band shows that 88.6% employed fewer than 10 people, and 98.0% fewer than 50. Large enterprises, those employing 250 or more, accounted for only 0.4%. Numerically, there is a heavy skewing of corporate size in favour of the smaller enterprises, something which is not reflected in the uptake of STM purchases. There is a national corporate structure which has not as yet accommodated the business model set by traditional sci/tech/med journal publishers. As chapter 18 has pointed out, there are a growing number of graduates and postgraduates finding employment in the private sector each year, and they come to companies with a scientific training and the ability to make use of this in a non-academic environment. There is also potential for greater collaboration between academia and industry as universities see their mission being extended to support public/private initiatives.
But this would always be hampered if one side in the collaboration has unequal access to research results. There is as yet only marginal interest being shown by public or government bodies in improving information access for SMEs. These attempts are straitjacketed by the prevailing business models. Government agencies have been reluctant to interfere in the commercial practices of STM publishing – whether in terms of implementing strict merger and acquisition (M&A) policies in the 1990s or
being more consistent on open access policies. There may be some fine words expressed but insufficient activity in support of SME informational needs. The consequence is that SMEs remain a sector of unknown significance for STM research material, and something of a backwater as far as STM publishers and government activities are concerned. An indication of the importance of SMEs can be distilled from the few studies which have been reported in this area in recent years.
Ware Study

A detailed analysis of SMEs was undertaken for the Publishing Research Consortium by the British consultant Mark Ware (Ware, 2009), who investigated information access problems facing staff in small and medium enterprises – i.e., outside the walls of both academia and large industrial organisations. SMEs are traditionally defined as enterprises with fewer than 250 employees. This can be subdivided further by employee numbers. U.K. and European Union statistical offices define small businesses, for example, as those having fewer than 50 employees; those with 50–249 are medium-sized enterprises. Some statistical compilations also include a category of ‘micro businesses’, which are those with fewer than 10 employees. Ware’s research showed that, whilst there is an improvement in access to scientific literature by the broader category of SMEs, their overall efficiency with regard to STM information usage is hampered by high article prices, misleading or uninformative abstracts and complex payment mechanisms – all of which are publisher-controlled features. Ware’s conclusions are that if STM publishers were to serve the needs of the SME community they would need to (a) lower prices, (b) simplify Web 2.0 interfaces and (c) provide access models which allow the user to experience content more intimately before making a purchase. Such changes would enable the ‘long tail’ of SMEs to be brought more easily into the scientific communication system. However, this was only part of the picture. More active and innovative business strategies to promote the availability of research literature to a wider group of SMEs in more diverse ways would also be required. There are many companies outside the mainstream of research publications, even though these publications are not directly relevant to the SMEs’ R&D or innovation activities. The numbers are striking – according to Ware, in 2006 there were 4.7 million companies in the U.K., or twice as many as the ONS data suggests.
99.9% of these were SMEs. Furthermore, 99.3% were businesses with fewer than 50 staff. In aggregate these represented 59% of all private sector employment, and 37% of overall turnover.
It is likely that those SME institutions which are nearer the top of the arbitrary size level (250 employees) may exhibit research and information gathering habits similar to those of the large corporate research companies. The Ware report therefore combined elements of the affiliated (the larger of the medium-sized organisations within his SME framework) with the different challenges facing the much smaller and independent operators who are unaffiliated. As such the distinctiveness of organisations with up to 25 employees – more representative of genuine SMEs – is lost. Ware assumed that a majority of micro-businesses and SMEs would have no interest in scientific or research publications. This assumption could be challenged given the increase in output of potential innovators going straight from the higher education system into start-ups within U.K. industry (see previous chapter).
Economic impact of innovative SMEs

Though R&D may be seen as the lifeblood of economic progress, it was estimated (by Mansfield, 1991) that only 5% of total sales could be attributed directly to academic research. Mansfield has shown that the proportion of new products launched between 1986 and 1994 which depended on R&D varied between 5% and 31% depending on business sector. Another commentator (Henry, 2007) claimed that SMEs made a major contribution to the commercialisation of emerging technologies, and that universities played a significant part in this process. Some 22% of SMEs attributed new product ideas to research undertaken within universities. Specific issues raised by Ware included findings which make clear that there is a subset of SMEs for whom access to research literature is highly important to their success. Barriers to accessing scientific information were more serious among SMEs (ranked fourth in their list of barriers) than among large corporations (where barriers to access ranked tenth). SMEs attached a high level of importance to research articles, putting them higher than other forms of reference publications in their need for access. This is in contrast to large corporate research centres, which ranked technical information and standards publications higher than research articles. However, the SME audience surveyed by Ware was selected on the basis of their being technical innovators, which could have biased the findings. Ware pointed out that whilst access to journal articles was easy/fairly easy for 71% of respondents, this figure is less than for large companies (82%) and universities (94%), which benefit from being affiliated. A majority of SMEs (55%) have experienced difficulties in accessing an article. This is also higher
than that given by large companies (34%) and universities (24%). For example, as an indicator of the distortion caused by the blanket SME coverage, there is evidence that ‘subscriptions and licences’ represented 42% of SME information usage in the Ware coverage, and personal subscriptions and society memberships 22%. These are proportions more reflective of an affiliated access community than of non-affiliated organisations, whose subscriptions/licences are likely to be negligible. The Ware survey was based on subscribers to technical industrial/trade publications, STM journal authors and individuals who had purchased articles by pay-per-view (PPV). 29,090 emails were sent out to this group, and a total of 1,131 completed questionnaires were received (a 4% response rate). However, only 186 of these came from SMEs (and, as pointed out earlier, many of these would reflect the affiliated community rather than the unaffiliated). The other responses came from large corporations (111), universities (470), and research institutes (363). So the working number of responses for the purposes of this study is the proportion of the 186 which are ‘small’ – 98 responses came from organisations employing fewer than 25 people. Mark Ware’s study nevertheless highlighted some of the difficulties facing an information service geared to end users in the non-institutional sector. For SMEs these difficulties were:
– perceived high prices for published information
– the need to review the full text even of irrelevant articles to assess their value (as a result of uninformative and misleading abstracts)
– the need to buy articles from a plethora of individual publisher web sites, none of which adopt a standard approach even after Google, PubMed Central, etc., has identified sources for the article
– company purchasing procedures which, several respondents claimed, stood in the way of easy acquisition of required articles.
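The response funnel in Ware's survey can be recomputed from the counts quoted above. The short Python sketch below derives the percentages; the input counts are from the text, and only the derived shares are calculated here:

```python
# Ware (2009) survey response funnel, recomputed from the counts in the text.

emails_sent = 29_090          # questionnaires emailed out
completed = 1_131             # completed questionnaires returned
sme_responses = 186           # responses from SMEs
small_org_responses = 98      # responses from organisations with < 25 staff

response_rate = completed / emails_sent            # ~3.9%, the quoted "4%"
sme_share = sme_responses / completed              # SMEs as a share of all responses
small_share = small_org_responses / sme_responses  # 'genuinely small' share of the SME group

print(f"Overall response rate:        {response_rate:.1%}")
print(f"SME share of responses:       {sme_share:.1%}")
print(f"Small-org share of SME group: {small_share:.1%}")
```

The recomputation makes the sampling caveat concrete: SMEs supplied roughly one sixth of all returns, and only about half of those came from genuinely small organisations.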
Very little attention was given to extending ‘walk-in’ rights to academic libraries, which might confer benefits on SMEs. Walk-in access to the local academic library represented only 2% of usages, and only five of the interviewees had made use of the potential which this gave for gaining broader access. One of the difficulties mentioned was the travel which the SME representatives would face in visiting the academic or public library, only to face restrictions on the use of electronic journals when they finally turned up at the library. SMEs overall in the Ware survey indicated that 5% of their usage came from pay-per-view (PPV), which is a factor of five or so greater than that in the university or corporate sector. Almost one third of the SMEs among Ware’s respondents
had used PPV at least once per month, compared with only 14% of large companies and 7% of academics. Smaller SMEs might be more responsive to buying individual articles if a more appropriate pricing model were put in place. However, Ware’s view was that “Pay per view is not currently a frequently-used channel and our interviews suggest that it has a number of unattractive features for users that are likely to limit its expansion in its present form”. He also felt that the iTunes model of charging (see DeepDyve, in chapter 25) had only a slight chance of success. However, his report was written over five years ago, since when the elements of the ‘perfect storm’ and other changes may have altered perceptions within the SME market. Of interest was Ware’s assessment that access via an academic library or interlibrary loan (through a local public library) was rare, representing only 1–2% of usages. Ware did suggest that the learned society or membership group may increase its library function to help with access for its members, but this was a suggested option for investigation rather than a quantified recommendation.
Access to Research Information in Denmark

In a report to the Danish Agency for Science, Technology and Innovation and Denmark’s Electronic Research Library, John Houghton, Alma Swan and Sheridan Brown (from Key Perspectives) covered similar ground to that undertaken by Mark Ware. Essentially they used a questionnaire and interview approach to assess the gaps and barriers facing scientific communication access in Denmark’s small/medium enterprises (SMEs). Their survey (Houghton et al., 2011) was based on 98 questionnaire responses and 23 interviews among small firms in Denmark (of which 49% had fewer than 10 employees). Respondents were mainly in research or senior management. As with similar surveys, research articles, patent information, sci/tech standards, and market reports were seen as important sources of STM information. 48% rated research articles as the most important source. The SME users’ greatest difficulty came in accessing market survey reports and also postgraduate theses. In terms of importance to the users, both research articles and market reports were seen as most significant and also most difficult to access. Access to toll-based information was mainly through personal subscriptions (62% of respondents) and in-house libraries (57%). Public libraries, interlibrary loans (ILLs) and pay-per-view (PPV) were used infrequently. Use of open access publications was somewhat prevalent – 50% used IRs (institutional repositories) and OA journals at least monthly. Among researchers only, 72% claimed to use IRs and 56% OA journals. 68% of respondents read research
articles at least monthly (although amongst researchers it was 85%). 38% of the respondents had difficulty accessing journal articles (and a further 41% had occasional difficulty). Only 6% never had problems. The average time spent accessing difficult-to-access journal articles was 51 minutes for librarians (63 minutes for researchers). Assuming an hour is average, this is costing the Danish research community €72 million p.a. Not being able to access journal articles as and when needed as part of the research process was said by 27% to have resulted in delayed or abandoned new products, services or processes. The SMEs’ new products contributed an average of 46% of annual sales. Other key findings in the survey included:
– the value of academic research to sales was €2.1 million per company
– cost savings amounted to €0.49 million per company
– it would have taken an additional 2.2 years to introduce new products without access to research publications (or €4.8 million per company in lost sales)
– in the interviews which were held, the main call was for improved access to research articles, patents, legislative/regulatory and market information, and for affordability.
As indicated earlier in this chapter, the existing publishing model does not work for SMEs – the content they need is spread widely across too many titles and too many information formats. The world they operate in is not organised by journal and discipline. Though PPV is theoretically a better model, it is proving too expensive for SMEs under current article pricing models. It is claimed that neither of the two mainstream business models (toll-based subscriptions and Open Access) works for them, and options such as consortia purchasing, extended licensing, specific funding for SMEs, and greater mandated support for open access were suggested by the authors (Swan and Houghton, both of whom are committed Open Access advocates).
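The €72 million estimate above implies a simple cost model: incidents per year × hours per incident × hourly labour cost. The report as summarised here supplies only the time component (roughly an hour), so the sketch below backs out the implied number of difficult-access incidents under a hypothetical hourly rate – the €60/hour figure is an illustrative assumption, not a number from the study:

```python
# Back-of-envelope reconstruction of the €72m p.a. access-cost estimate.
# Model: total cost = incidents per year * hours per incident * hourly labour cost

total_cost_eur = 72_000_000    # Houghton et al. estimate for Denmark (from the text)
hours_per_incident = 1.0       # "assuming an hour is average" (from the text)
hourly_cost_eur = 60.0         # HYPOTHETICAL fully-loaded labour rate, for illustration

implied_incidents = total_cost_eur / (hours_per_incident * hourly_cost_eur)
print(f"Implied difficult-access incidents per year: {implied_incidents:,.0f}")
# At €60/hour the estimate would imply 1,200,000 such incidents per year.
```

A different assumed labour rate scales the implied incident count proportionally; the point is only that the headline figure presupposes access problems on the order of a million researcher-hours per year.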
Other Studies

The Finch Report (RIN, 2012) recommended that SMEs in the UK be investigated to see whether there was much appetite for an extension of Jisc’s national academic licences to include them. Publishers were to be included in the research insofar as there is a commercial issue to be faced – whether such an extension of the academic licence would make the sale of such licences viable for publishers. The investigators on this project were due to present their final report in January 2015.
InnoCentive

There is a support mechanism for commercial companies seeking answers to research questions through a service called InnoCentive. InnoCentive presents itself as a global leader in crowdsourcing innovation problems among the world’s smartest people, who compete to provide ideas and solutions to important business, social, policy, scientific, and technical challenges. A network of millions of problem solvers, a proven challenge methodology, and cloud-based technology combine to help transform the economics of innovation. For more than a decade, leading commercial, government, and non-profit organisations such as AARP Foundation, Booz Allen Hamilton, Cleveland Clinic, Eli Lilly & Company, EMC Corporation, NASA, Nature Publishing Group, Procter & Gamble, Syngenta, The Economist, and The Rockefeller Foundation have partnered with InnoCentive to generate new ideas and solve problems faster, more cost effectively, and with less risk than before. Participating companies post ‘challenges’ – descriptions of scientific and technical problems that require innovative solutions – which anyone can respond to, with the prospect of earning money for providing a solution. The connections made by InnoCentive are between parties who would otherwise only have met accidentally. “The attention of the right expert at the right time is often the single most valuable resource one can have in creative problem solving” (Nielsen, 2011), even if the resource is physically and professionally remote from the problem. There are more than 270,000 people from 175 countries signed up to InnoCentive, and ‘prizes’ for more than 200 challenges have been awarded. In effect InnoCentive harnesses the micro-expertise of many different individuals who would not normally come together.
This collaboration has been defined by Jon Udell as ‘designed serendipity’: the process whereby the many intractable problems facing a scientist – from large to small – are unlocked by finding the right expert at the right time to help. That person can be anywhere in the world. Collaboration is key to designed serendipity. It is claimed that when we try to resolve a problem on our own, most of the ideas go nowhere. But when many people are stimulated to address the problem, interaction increases through the ‘network effect.’ This happens when the number and diversity of participants increases the chances of finding a way through the problem. The problem solving goes ‘critical’. “Once the system goes critical the collaborative process becomes self-sustaining. That jump qualitatively changes how we solve problems, taking us to a new and higher level” (Nielsen, 2011).
InnoCentive is one example of the empowering of a wider audience which is taking place as a result of the confluence of technical and social trends. It opens up the potential for the amateur, the interested bystander and the educated public to become involved in scientific processes which have traditionally been the preserve of dedicated and highly trained scientists in academia and industry. There is just as much scope for creative solutions to come from the enfranchised knowledge worker sector; this is not just a feature of the academic/research interaction.
Summary

As society relies on the innovative application of research results to improve the welfare of citizens, stronger links could be expected to be forged between parts of industry and universities. This has been a feature for many years, with universities spinning off projects to become commercial enterprises. Many of these are science-based, and as such require a continued link back to the research effort from which they were spawned. Any business plan adopted to increase the efficiency of disseminating research results would need to take into account the needs of this UKW community. As described in the Academic Knowledge Worker chapter (chapter 18), a large section of the graduate and postgraduate population in the UK leaves academia each year to take up positions within the corporate sector. It is not known what proportion of those leaving universities move into SMEs, but it can be assumed that there is a sizeable number given the 'long tail' of corporations and the many new innovative companies emerging in sectors such as IT, finance and biomedical equipment. New companies also spin out of academia each year as university-based research exposes opportunities for commercial exploitation. These developments create the potential to extend the reach of scientific research output to a large and innovative audience beyond traditional academic boundaries. In some cases it may be too late: some smaller, dynamic SMEs operating in IT-related areas may already be active users of social media as sources for STM updates. They may have found the communication style and information culture surrounding new media more to their liking than having to rely on books, journals and research articles.
Chapter 22 Citizen Scientists

Introduction

Much interest has arisen as more and more people use the Internet to see how they can capitalise on what has become a major source of information and social interaction for the digital consumer. One aspect of this is how science can be woven into the Internet, and whether it brings greater democracy and involvement by what has hitherto been an unaffiliated group of researchers. We are seeing the emergence of 'armchair scientists' or 'citizen scientists' as the Internet is used to bring individuals into direct contact with working science. The Wikipedia entry for the term 'Citizen Science' states: Citizen Science is a term used for projects or ongoing program of scientific work in which individual volunteers or networks of volunteers, many of whom may have no specific scientific training, perform or manage research-related tasks such as observation, measurement or computation.
Although Citizen Science is not new, what has changed is the emergence of far more online tools that enable more people to participate. These online tools not only make it easier to participate – to share and collaborate on a global scale – but also expand the range of scientific work people can do. The barriers to entry are now low – both in time and in technical resources required.
Citizen Science

Scientific research in many disciplines relies increasingly on valuable analyses done by people outside the laboratory, including specialists in data analysis and in apparatus development. Research data is also being generated by large computer systems, with the resulting data requiring human analysis beyond the capabilities of computers (particularly relevant in astronomy, which requires human classification of galaxy images). This results in new disciplines being created which are conjoints of established disciplines – such as bioinformatics (computing with biology), cheminformatics, astroinformatics, etc. Many of these involve individuals outside the mainstream of academic-based research, yet they would benefit from access to relevant research output, and the research area would also benefit from their
individual contributions. Some research areas are of great personal interest to former academics, who would like to retain some link with ongoing developments in these areas. A growing number of international research projects rely for their success on participation from a mass audience. There is, for example, the SDSS or Sloan Digital Sky Survey, in which volunteers are given photographs of galaxies taken by a robotic telescope and asked to answer questions such as "is this a spiral or elliptical galaxy?" and "if this is a spiral, do the arms rotate clockwise or anticlockwise?" The volunteers are usually able to classify the galaxies manually much better than computers can. The Sloan Digital Sky Survey's SkyServer (http://cas.sdss.org/dr5/en/) contains approximately three terabytes of free public data provided by 13 institutions, with 500 attributes for each of the 300 million 'objects'. In effect it is a prototype virtual e-Science laboratory in astronomy. Over the past six years there have been 350 million web hits on the SkyServer and some 930,000 users. Anyone can go to the SDSS online SkyServer and download stunning images of distant galaxies; the site is designed to be used not just by professional astronomers but also by citizen scientists and members of the public. Collaborative citizen science projects also exist in areas such as monitoring the earth's atmosphere and surface, feeding data into central repositories about the earth's climate. Other examples include collaborative protein-folding projects, in which some 75,000 people participate; these have become as much entertainment (akin to computer gaming) as a source of important research findings in biomedicine.
In several domains citizen science has a long history – for example, among the Victorian naturalists and in areas of ornithology, meteorology and archaeology, where an emphasis on observational recording is central to scholarship. The eBird service, for example, uses the local input of some 2,500 volunteers to monitor bird populations and migrations. The National Audubon Society Christmas Bird Count has taken place annually for over 100 years. These are just some of the many online citizen science projects recruiting volunteers, most of them without serious scientific training, to help solve practical scientific research problems. They are important projects in which large groups of volunteers can attack scientific problems beyond the reach of small groups of experts and individual professionals. We are seeing a resurgence in citizen science as the social culture of the Web begins to influence and radically change the way science is performed. Citizen science is evolving, driven by the tendency of science to become more cross-disciplinary, with computers collecting and storing data, and with information being mined from a multiplicity of sources. Online tools are
changing the relationship between science and society. This has become apparent in a number of subject areas.
Democratisation of Scientific Publications

Science is becoming more open and democratic. As society becomes more educated and professionally trained, the old structural and elitist barriers to access are beginning to break down. Suggestions for 'translating' high-level content for a wider audience open up new opportunities for scientific journalism. One concept advanced by a U.S. consultant (Esposito, 2007) is the 'Nautilus', described earlier in chapter 17, which suggests that high-level research information has a widening circle of interest: concentric rings move out from the centre, each additional ring requiring a less specialised description of the research finding. The present publishing system focuses only on the central core of research users within a discipline, not on the growing concentric rings. Esposito offers the scenario that publishing and open journalism could take on not just the primary role of certifying the original research result, but also a tertiary role of interpretation for much wider audiences (see chapter 17). Esposito further believes that the future of communications is based on the infrastructure of consumerism. "This is because in a networked world the number of nodes connected to a network matter [Metcalfe's Law] and the consumer market has the big numbers" (Esposito, 2012a). The issue which needs to be addressed is how to layer academic needs and interests onto the platforms of the consumer market, using such tools as Google, the iPhone, Facebook, Twitter and LinkedIn. New commercial paradigms and consumer-adapted services could allow wider, more diverse and more numerous communities to benefit from specialised research results. This has implications for who will be the key players in this industry sector in future and how they deliver information to 'the long tail' of newly enfranchised knowledge workers. The benefits are considerable.
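The network arithmetic behind the quoted Metcalfe's Law point can be sketched simply: the number of possible pairwise connections grows roughly with the square of the number of participants, which is why consumer-scale platforms dwarf specialist ones. The scales used below are illustrative assumptions, not figures from the text.

```python
# Metcalfe's Law sketch: network "value" is often proxied by the number of
# distinct pairwise links, n*(n-1)/2, which grows as n squared.

def potential_links(n: int) -> int:
    """Distinct pairs among n participants."""
    return n * (n - 1) // 2

# Illustrative scales only (assumed): a specialist journal readership,
# a large learned society, and a consumer platform.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>12,} nodes -> {potential_links(n):,} possible links")
```

Squaring the node count in this way is what gives consumer platforms their "big numbers" advantage over any single specialist community.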
The value generated from research output would be multiplied within society if more and more people were able to access and make use of it in a variety of different ways. We are looking at scientific communication becoming more ‘democratic’ in its accessibility, and the ‘multiplier’ effect coming into play to support greater ‘reach’ of research results within society.
The Data Web

Separate citizen science projects take on a new dimension as and when they become linked and the data resources of all the projects become cross-searchable. This is the concept of the Data Web, enabling not only existing questions to be answered through the sum of all the content, but also new questions to be asked. It gives artificial intelligence a new meaning: the interaction between Data Webs and artificial intelligence is data-driven intelligence (Nielsen, 2011), and it is growing rapidly. It highlights a new way of finding meaning, one which differs from a manual approach and capitalises on the strengths of combined computer and human interaction. However, citizen science itself is not an invention of the Internet. According to Nielsen (2011), many of the earliest scientists were amateurs. The Internet has, however, changed the nature of citizen science by enabling far more people to participate using common data sources and common procedures, and in the course of this it has led to the creation of communities of like-minded members. In David Weinberger's book (Weinberger, 2007) the author says "...the change in the infrastructure of knowledge is altering knowledge's shape and nature. As knowledge becomes networked, the smartest person in the room isn't the person standing at the front lecturing us, and isn't the collective wisdom of those in the room. The smartest person in the room is the room itself: the network that joins the people and ideas in the room and connects to those outside it". Will the growth of the networks which support citizen science transform the whole structure of science and scientific reporting? Or is it relevant only to a few isolated disciplines and projects? And will it grow, or is it a fad which has already peaked? Is the attention span of current citizen scientists exhausted?
To tackle the last issue first: if one takes one of the more established citizen science projects, Galaxy Zoo, with its 200,000 participants, gross estimates suggest the volunteer effort amounts to an annual full-time equivalent of a team of 250 people. By comparison, Americans spend as much as five hours per day watching television, which amounts to some 500 billion hours of TV watching per year. This indicates there is considerable scope for citizen science to expand in future if the motivation and talent can be captured and activated. This is the point made by Shirky (2010) as part of his 'cognitive surplus', which is a feature of an increasingly educated society (see chapter 14). To quote Michael Nielsen (2011), "We're seeing a great flowering of citizen science". He goes on to speculate: "will we one day see Nobel Prizes won by huge collaborations dominated by amateurs?" – such is the pace and extent of the collaborations being undertaken by amateurs in scientific progress.
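The back-of-envelope arithmetic above can be made explicit. The sketch below uses the chapter's quoted figures for participants and daily TV viewing; the hours contributed per volunteer and the U.S. population are my own assumptions, chosen only to show the order of magnitude.

```python
# Rough "cognitive surplus" arithmetic, following the figures quoted above.
# hours_per_volunteer_year and us_population are assumptions, not from the text.

participants = 200_000           # Galaxy Zoo volunteers (quoted figure)
hours_per_volunteer_year = 2.5   # assumed average contribution per volunteer
fte_hours_per_year = 2_000       # nominal full-time working year

fte_team = participants * hours_per_volunteer_year / fte_hours_per_year
print(f"Volunteer effort is roughly {fte_team:.0f} full-time staff")

us_population = 300_000_000      # order-of-magnitude assumption
tv_hours_per_day = 5             # quoted figure
tv_hours_per_year = us_population * tv_hours_per_day * 365
print(f"US TV viewing is roughly {tv_hours_per_year / 1e9:.0f} billion hours/year")
```

On these assumptions the volunteer pool equates to about 250 full-time staff, set against roughly 550 billion TV-hours a year: that gap is the scale of latent effort the 'cognitive surplus' argument points to.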
Amateur Scientist Sectors

Although there are many professionally trained astronomers in the world, working in large establishments with access to sophisticated telescopes and supported by well-stocked research libraries, there are many more who work from home or the office trying to keep pace with the latest stellar findings – and in some instances contributing to the world's astronomical data and knowledge base. In January 2012, BBC TV ran a programme entitled 'Stargazing' which commented on the educational experience of online interaction in astronomy. It attracted an audience of over a million viewers, showing the extent of interest in this scientific area – a level of interest which is not reflected in the uptake of learned journal subscriptions in astronomy. In a further UK example, the BBC LabUK initiative is harnessing community effort in online experiments and is seeking to work with scientists to help solve professional research challenges suited to mass participation through this medium. Exemplars such as BBC SpringWatch, eBird and Bioblitz Bristol have all brought Web and mobile technologies to bear in engaging the public in collecting natural history data and monitoring species living close to humans. In 2011, the Zoological Society of London created the Instant WILD app for iPhones, enabling scientists and citizen scientists to monitor some of the most remote wildlife on the planet. Mobile phone users can look for rare animals in their natural habitat at any time. The app works by accessing motion-sensitive cameras placed in forest clearings in Kenya, Sri Lanka, and Mongolia. Citizen scientists can help the research process by classifying the animals seen. Interest in other areas such as the environment, global warming, and pollution has fuelled additional demand for relevant research results to be made openly and easily accessible.
Issues such as world famine, meteorology and global poverty have also inspired countless individuals to research, investigate and comment, often without easy access to the research results which could underpin their analyses and commentaries. The formation of the Citizen Cyberscience Centre, a collaboration between CERN, UNITAR and UNIGE, is an indicator of the perceived importance of this approach, particularly for international collaboration, for developing countries and for neglected diseases. It indicates that the move towards a democratised science is not confined to a few select areas – it has a broad base, particularly in environmental subjects. In 2009 a Cambridge-based mathematician, Tim Gowers, created a networked approach to solving some stubborn mathematical problems. He used his blog
to invite readers to help solve a particular challenge, which became known as the Polymath Project. After a slow start, participation from around the world exploded as mathematicians of all levels of expertise joined forces in pursuit of a solution. After 37 days of open collaboration, it was claimed that not only had the original problem been solved but a broader and harder problem had been resolved as well. This started a whole series of collective problem-solving exercises online, involving more than a hundred mathematicians. Such massively collaborative mathematics has become a powerful new way of attacking difficult mathematical problems. Other large cooperative research projects include the human genome project, completed in 2003, and the HapMap (haplotype map), completed in 2007, which charts how human beings differ in their genetic code. Biologists from around the world upload genetic data from their laboratories to central data services such as GenBank. Citizen science can be a powerful way both to collect and to analyse enormous data sets, and volunteers can look for the unusual and unexpected. Such projects are not always successful in their stated aims, but when they do succeed they prove to be an asset in the advance of scientific research. Citizen science may work best in areas where there is a need for a community of action. It requires common tools and services for communication, teaching and education, mentorship, etc. Online communities could be built around virtual seminar series and conferences, online question-and-answer sessions, and discussion groups. In this new world the refereed research paper is only of partial relevance. Nor does one have to be an eminent academic to qualify to become part of such a community. Many people are smart, and as long as they have the interest and commitment, and as long as the tools are available, they can become active citizen scientists.
Networks of Citizen Scientists

In a book describing the new 'wikinomics' business models, the authors Tapscott and Williams (Tapscott & Williams, 2006) contend that conditions are set for 'the perfect storm' to occur in corporate R&D. The interactions between new platforms for social collaboration; a new generation of those who are accustomed to collaborating; a new global economy which supports new forms of economic cooperation; and a new business model which aligns itself more to the world of the Internet than to the book or journal – all have a combined impact on the way research corporations conduct their R&D.
This plays out in a number of ways. At the micro level, it enables many micro-expertises to be brought together to create something much larger than the sum of its parts. Each individual contributor on their own might not have an established reputation, but they may have one small element of knowledge which far exceeds that of the average professional working in the area. Bringing these individual high-level expertises together creates a solution to a problem which would defeat any one person working alone. Though individual scientists do have 'Eureka' moments when solving problems on their own, such chance connections happen infrequently. Many research efforts are blocked by a problem which is fairly routine and which could be solved by finding the right expert to help at the right time. In the past this was difficult – with social media it has become easier. For those involved it becomes enjoyable to spend one's time concentrating on the specific problems where one has a special insight and advantage. This process has become known as 'designed serendipity' and was described briefly earlier. For designed serendipity to succeed, the modularity of the input into an online project should be controllable. If the project is too large to provide an overall context within which each participant can feel comfortable, it will break up. A modular structure is necessary for any such large project to be successful.
In essence we have:
– in the community there is a tremendous amount of expertise
– each individual's expertise can represent a small element of the overall problem to be solved
– the expertise is however currently latent
– social tools enable such latent micro-expertises to be harnessed
– these online tools create an 'architecture of attention'
– collectively this harnessing exceeds the expertise of any one individual expert
– a series of modules may be necessary for large social projects to achieve success
– each module would have its own harnessing of latent expertises
– conversational critical mass would thereby be achieved.

As a workflow process this builds on the individual-focused nature of research scientists and, through sharing, the use of online tools and collaboration, produces a more effective collective intelligence. It adds another dimension to the democratisation of the research process, and enables the wider body of knowledge workers – with their individual expertises – to be embraced within the scientific research process.
Business Models

The need for access to research outputs can be as strong within the Unaffiliated Knowledge Worker sector as it is within subscription-paying institutions. The difference lies in how the research output can be packaged to meet these two different market requirements, and delivered in a way which enables the whole publication system to remain viable. What should be avoided is cannibalisation of the traditional publishing system, such that it is no longer attractive for publishers to be involved in the dissemination of research results before an alternative business model can become effective. Phrases such as 'throwing the baby out with the bathwater' and 'shutting the stable doors after the horses have bolted' spring to mind. The onus is therefore on 'information providers' (which can include the more innovative traditional publishers) to come up with business models that enable those hundreds of thousands or millions of citizen scientists to move from being a latent market for high-level scientific information to becoming part of the system. Latency needs to be monetised in a way which is acceptable to all. It is an unattractive feature of existing toll-based publication systems that institutions pay for articles which are not relevant to their clients and customers. The subscription system demands that they take what is offered, with little choice in determining what should be delivered. For the individual citizen scientist this is a huge problem – personal funds cannot be wasted on buying published information which has little or no relevance to their career or individual needs. Besides being financially crippling, it also means that time – a valuable resource – can be wasted sifting through unwanted material. This raises scope for business models which are sufficiently granular to enable the interest profile of an individual citizen scientist to be matched against the content of publications coming on stream.
Selective Dissemination of Information (SDI) was a method considered decades ago, before sophisticated computer systems were available, for matching user profiles against the metadata of newly published articles. Such an SDI scheme would seem to offer advantages over the current scheme of broadcast publishing. The concept has been 'reinvented' in a modern form as RSS (rich site summary) feeds or alerts run from individual data sites. Document delivery, PPV and walk-in access also offer complementary delivery options, but they operate after the event: they come into effect once the search process has discovered items of interest, and the citizen scientist is then faced with financial and administrative barriers preventing ease of access to the full information content. Access in this case would be to an article of record written months or years earlier. An alternative SDI system would be to predict what items
may be relevant to the target audience, based on a profile of interest or on what the individual has shown interest in during the recent past, and to supply information proactively in advance. It is analogous to the system which Amazon uses to stimulate book sales – using records of purchase to recommend relevant and related items. The idea is to follow the digital trail left by an online researcher and to match this trail against incoming articles. Pricing of such a scheme would take into account the low price threshold which citizen scientists have for buying into such an SDI system. A pricing structure which is innovative, and which builds on the 'Free' concept underlying the Internet and web services, would be necessary. Lessons could be learnt from the pricing of goods and services introduced by successful Internet companies, rather than relying on the traditional practices of scientific journal publishers, which have no relevance for this 'individual' as opposed to 'institutional' audience. Micropayments, PayPal and other systems come to mind, supported by advertising targeted at the digital trails left by individuals.
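The SDI matching step described above can be sketched as a similarity comparison between a reader's interest profile and the keyword metadata of incoming articles. Everything below – the profile, the article titles and keywords, and the 0.5 alert threshold – is invented for illustration; a real service would use richer metadata and learned weights rather than raw keyword counts.

```python
# Sketch of SDI-style alerting: score incoming article metadata against a
# citizen scientist's interest profile and alert only on strong matches.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two keyword-frequency vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical interest profile built from the reader's past behaviour.
profile = Counter(["galaxy", "classification", "astronomy", "survey"])

# Hypothetical incoming articles with keyword metadata.
articles = {
    "Morphological classification of galaxy survey images":
        Counter(["galaxy", "classification", "survey", "imaging"]),
    "Enzyme kinetics in yeast metabolism":
        Counter(["enzyme", "kinetics", "yeast", "metabolism"]),
}

ALERT_THRESHOLD = 0.5  # chosen arbitrarily for this sketch
for title, keywords in articles.items():
    score = cosine(profile, keywords)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT ({score:.2f}): {title}")
```

Only the astronomy paper clears the threshold, so the reader is alerted to it and never sees the irrelevant item – the granular, proactive matching the text calls for.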
Summary

There is no commercial strategy which allows the citizen scientist to buy into a relevant STM information service, one which follows the Internet practice of being as free to end users as possible. There is no clear editorial strategy which enables high-level research results from academia to be translated for a mass market of citizen scientists, even though that mass market is becoming increasingly scientifically literate. There is no administrative strategy which provides the mechanisms for creators of scientific information and the many latent expertises within the citizen science community to interact in a structured, visible and sustainable way. However, 'Big Science' indicates a way of bringing research data together with an interested public. The pressure from lobbyists, interest groups and vocal individuals will exert additional pressure on scientific publishing to change, and potentially to include the mass market of UKWs within citizen science. New delivery options and new pricing models may be necessary to attract UKWs, including more personalised and customised approaches.
Chapter 23 Learned Societies

Introduction

Little has been said so far about the role which learned societies could play in effecting a major transformation in scientific communication. Learned societies could sift and channel available information – in whatever format – to a target group whose information profile and information needs they know much better than either publishers or librarians. The latter are more generic in their approach to information storage and dissemination; learned societies are specific to a kindred community – much more personalised and customisable. As such, there are some professions which could benefit from their learned society becoming more closely involved in tailoring an information service to their individual and specific needs. Learned societies are organisations that exist to promote the specific interests of an academic discipline or profession. Membership of a learned society may be open to all, may require possession of a recognised qualification, or may be an honour conferred by election. Some act as professional bodies, regulating the activities of their members in the public interest or the collective interest of their membership. The formation of a society is an important step in the recognition of a new discipline, sub-discipline or profession. Learned societies are important in the sociology of science.
Learned Societies

Learned societies are, by and large, not-for-profit institutions. That does not mean they fail to make surpluses. In 2004, the American Chemical Society, for example, made a surplus of about $40 million on its journals and online databases, from revenues of $340 million. That is a smaller margin than Elsevier's – about 12% of revenues, compared with Elsevier's profits of $1,100 million on revenues of $3,200 million (2009), or some 34% – but the ACS nonetheless makes a sound return as what is essentially a non-profit institution. Societies can be general in nature, such as the American Association for the Advancement of Science (AAAS); specific to a given discipline, such as the Modern Language Association; or specific to a given area of study, such as the American Association of Professors of Yiddish. They can also operate in a particular country or internationally, such as the International Federation of Library Associations (IFLA) or the Regional Studies Association, in which case they often
have national or regional branches. The number of society members can vary widely, from a few specialists to several thousand. A list of some 17,000 U.K.-based learned societies and trade associations is published in the CBD Directory of British Associations (CBD, 2009), together with addresses, contact details and, in many cases, the number of individual members in the U.K. A selection of such U.K. learned societies is given in Tab. 23.1. It is not a comprehensive list of science-related societies, merely an indication of those which might support a wider network of potential users of scientific material.

Tab. 23.1: A selection of U.K. learned societies and trade associations

Association of Chartered Certified Accountants
Association of Optometrists
Authors Licensing and Collecting Society
Biochemical Society
British Association for Print and Communication
British Astronomical Association
British Medical Association
British Psychological Society
British Trust for Ornithology
Chartered Institute of Library & Information Professionals
Council of British Archaeology
Dental Practitioners Association
Diabetes U.K.
Energy Institute
Federation of Small Businesses
Forensic Science Society
Friends of the Earth
Geographical Association
Historical Association
Institute of Biology
Institute of Biomedical Science
Institute of Ecology & Environmental Management
Institute of Food Science and Technology
Institute of Marine Engineering, Science & Technology
Institute of Mathematics & its Applications
Institute of Chemical Engineers
Institution of Civil Engineers
Institution of Engineering and Technology
Institution of Structural Engineers
Linnean Society of London
Marine Conservation Society
Mathematical Association
Mineralogical Society of GB & Ireland
Motor Neurone Disease Association
National Union of Teachers
Nautical Institute
Network of Government Library and Info Specialists
Operational Research Society
Patient Information Forum
PERA
Permaculture Association (Britain)
Philological Society
Physiological Society
Publishers Licensing Society
Royal Aeronautical Society
Royal Agricultural Society of England
Royal Anthropological Institute of GB & Ireland
Royal Archaeological Institute
Royal Astronomical Society
Royal College of General Practitioners
Royal College of Obstetricians and Gynaecologists
Royal College of Ophthalmologists
Royal Horticultural Society
Royal Institute of British Architects
The Royal Society
Royal Society of Chemistry
Royal Society of Edinburgh
Royal Society of Medicine
Scientific Exploration Society Ltd
Scientists for Global Responsibility
Scoliosis Association
Scottish Engineering
Scottish Motor Neurone Disease Association
Scottish Optoelectronics Association
Society of Applied Microbiology
Society of Authors
Society of Automotive Engineers
Society of British Neurological Surgeons
Society for Endocrinology
Society for Experimental Biology
Society of Food Hygiene and Technology
Society for General Microbiology
Society for Popular Astronomy
U.K. Clinical Pharmacy Association
U.K. Serials Group
Writers' Guild of GB
Journals and Professional Bodies

Not all new journal titles originate from a learned society – more and more titles are created by commercial and independent publishers, outside the context of a professional society, to meet the emerging 'twigging' trends in science and the growing multi-disciplinarity of research areas. This tends to diminish the significance of any one individual professional society as an STM publishing source; in aggregate, however, they still have editorial influence. The quoted membership numbers of the above learned societies are for the U.K. only. Taking a rough average across the associations listed above gives 24,000 members per association; multiplying this figure by 10–15 gives a rough global estimate of 300,000 members per average association. The distinction between current sales levels of journals (averaging between 800 and 1,000 subscriptions) and the potential readership among learned society members (hundreds of thousands worldwide) then becomes apparent. The 'long tail', on this tenuous basis, would be much in evidence. There is an association which acts on behalf of the publishing interests of learned and professional societies, currently based in the U.K. but with global aspirations: the Association of Learned and Professional Society Publishers, or ALPSP. Some of its members have their own publishing units; others have publishing activities handled on their behalf by commercial publishers. As examples of this diversity, Tab. 23.2 gives a selection of members and an indication of who was responsible for publishing their scientific journals.

Tab. 23.2: Publishing activities of ALPSP members (/)
Number of members
Publishing agency or partner
Association of Applied Biologists Biochemical Society
members , members , members ? , members , members , members
Wiley HighWire Press
BMJ Publishing Group British Ecological Society British Pharmacological Society British Psychological Society British Society for Immunology
BMJ Publishing Ltd Wiley Open British Pharmacological Society Wiley Online Wiley
Journals and Professional Bodies
301
Tab. 23.2: (continued) Learned society
Number of members
Publishing agency or partner
British Society for Rheumatology Company of Biologists Geological Society Hydrographic Society ICE Publishing
members ? ? members , members , members , members ?
Oxford University Press Company of Biologists Geological Society Publishing House Institute of Chemical Engineering/ Thomas Telford ImarEST
?
?
, members members ? ?
Oxford Digital Archive
Institute of Marine Engineering S&T Institute of Mathematics and its Applications Institute of Physics and Engineering in Medicine International Food Information Service London Mathematical Society Mineralogical Society Nutrition Society Pharmaceutical Society Physiological Society Royal Astronomical Society Royal College of General Practitioners Royal College of Nursing Royal College of Obstetrics & Gynaecology Royal College of Physicians Royal College of Psychiatrists Royal College of Radiologists Royal College of Surgeons of England Royal Geographical Society Royal Society
, members , members , members ? , members , members , members , members ? ? , members
Oxford University Press IoPP/Elsevier/Informa
Mineralogical Society Cambridge University Press PJ Press/Royal Pharmaceutical Society Wiley Wiley RCGP/World Wide Subscription Service BMJ Wiley Ingenta Maney Publishing Royal College of Radiologists Ingenta Royal Geographical Society HighWire Press
(continued)
302 Chapter 23
Learned Societies
Tab. 23.2: (continued) Learned society
Number of members
Publishing agency or partner
Royal Society of Chemistry
, members , members , members , members , members , members , members ?
Royal Society Chemistry Publis+ hing Royal Society Edinburgh Scotland Foundation Royal Society Medicine Press
members ? members , members ?
? Society of Biology Society of Indexers ?
?
?
, members ?
Institute of Engineering and Technology Wiley
Royal Society of Edinburgh Royal Society of Medicine Press Ltd Society for Editors and Proofreaders Society for Endocrinology Society for Experimental Biology Society for General Microbiology Society for Advancement of Management Studies Society for Underwater Technology Society of Biology Society of Indexers The Energy Institute The Gemmological Association of Great Britain The Institute and Faculty of Actuaries The Institution of Engineering and Technology Zoological Society of Lonon
Society of Editors and Proofreaders Portland Press (Wiley) Oxford Journals ? Wiley
?
Prior to the arrival of Robert Maxwell and the growth of Pergamon Press in the 1950s, the conventional way for societies to support publications was to require members to pay for them out of their annual membership fees. One of the benefits of membership was a ‘free’ subscription to the society’s publications – newsletters, magazines, learned journals, conference attendance and reports. Individuals or organisations which were not part of the society could still gain access to such publications, but they had to subscribe at a higher, non-member rate. The business relationship between professional associations and commercial publishers is based on the economies of scale offered by larger, more focused commercial operations. The economies take the form of more effective
technical platforms for delivering content; more technical bells and whistles in the services offered to end users; greater global marketing and distribution reach, particularly in the rapidly developing Far East markets; and the avoidance of internal bureaucracy which could lead a learned association to take its eye off its main mission. Pergamon Press was a keen advocate of a strong relationship between the commercial publishing sector and learned societies and their editors. This model was also adopted by other commercial publishers such as Blackwell Scientific, Springer and Elsevier.
EDP Study on Learned Societies and Open Access

During 2013/14, EDP Open commissioned TBI Communications to undertake a study among learned societies in the U.S. and U.K. to establish their perceptions of Open Access publishing in particular (Open Access is discussed in more detail in chapter 24). Thirty-three societies responded to the survey (TBI Communications, 2014). Some of the key issues which emerged from the study included:
– Only half the societies responding were strongly positive about Open Access, and a small proportion was strongly negative.
– Societies overwhelmingly agreed that Open Access will inevitably put some learned societies’ journals in financial jeopardy. This feeling was particularly strong among societies based outside the U.K.
– Related to this, by far the most significant challenge facing adopters of Open Access is maintaining revenues from existing publications.
– Appropriate responses and strategies were ill-defined among two-thirds of the respondents.
– Competing with specialist Open Access publishers was considered a challenge.
– Other concerns related to the reputation and role of publishers, freedom of choice, integrity, lowering of quality, funding, access and mandating (TBI, 2014).
A key opportunity for societies in adopting Open Access was to enable researchers in poor and developing countries to gain access to research information. Half the societies also felt there was an opportunity to increase interdisciplinary access to research information and research impact. It was also felt that societies had many opportunities relating to their role in advocacy and member communications. These could be enhanced by greater collaboration between societies, such as pooling resources and sharing complex
tasks. It was also believed that societies could be agile and start new journals in new and emerging disciplines, conferring on research outputs the stamp of quality and authority which their corporate image and respected social missions provide. The majority of societies were offering Open Access via the Green route in repositories (see chapter 24). Half the societies simply used delayed Open Access, whereby subscription access comes first and free access follows after an embargo period. Hybrid (by-article) OA was offered by over three-quarters of societies; Gold OA was offered by few. When it came to supporting the payment of article processing charges (APCs), or page charges, two-thirds of societies had no system in place. Further evidence of the confusion societies feel in the OA landscape was their wish to keep members updated on society-related events whilst simultaneously charging for research-related literature. Two-thirds of all societies were also looking for support on the best approach to OA and on compliance with funder mandates. They were hampered by lack of access to good information to help them make appropriate decisions (TBI Communications, 2014). One way forward would be for societies to commercialise their publishing-related activities and offer these services to other (smaller) learned societies, usually for a fee. The Society for Endocrinology, for example, has set up a separate company – Bioscientifica – which is the society’s commercial arm and aims to make a profitable return on the investment in its publishing system. Though this balancing act – undertaking a social mission whilst also achieving commercial returns – may be difficult, the situation is reconciled insofar as the profits which Bioscientifica brings in are for the benefit of the society and not for external investors.
According to a representative from Bioscientifica, the society is like many others in the U.K.: too small to make much of an impact on the publishing industry on its own. There is strength in scale, and each society on its own lacks it; as part of a consortium, however, economies of scale can be achieved. Bioscientifica has joined forces with other learned societies and university presses which use the same technical platform, developed by HighWire at Stanford University in the U.S.A. The group is the Independent Publishing Group, which includes some 5–6 U.K. learned societies among the group’s 23 publishers on the HighWire service. The group provides not only technical support but also common sales and marketing support. Other similar collaborative efforts among the learned societies include BioOne and JSTOR, and the Association of Learned and Professional Society Publishers (ALPSP) has also created its own version of the ‘Big Deal’ to compete with commercial publishers.
So far learned societies in the U.K. have focused on serving their members in the U.K. These new collaborative groupings would give learned societies strength in reaching out to wider global markets – even more so if communities and groups of users outside the traditional scope of the society could be brought within such a collaborative system. Opening up learned societies to such wider audience participation ties in with the main premise of this book. The key to success for societies in future is to get closer to their users, and to adapt their publication programmes to the specific needs of their communities. This needs to be done before commercial publishers disengage from their concentration on institutional research budgets and base their business plans on extracting funds from authors through the Gold Open Access movement. However, as Bioscientifica’s representative commented, it is often difficult to get society members to respond to questionnaires about their wishes and intentions; officers within the society are convinced that they know what the market wants, on both the clinical and the scientific side. This was also evident during meetings the author attended with several other learned societies. In this respect the attitudes of smaller learned societies in particular are no different from those which existed in the pre-digital era. Nevertheless, as societies such as the Society for Endocrinology expand their portfolios of products and services to include – besides the traditional journals, e-journals and books – conferences, meetings, blogs, association management, patents and standards, reports, ‘mash-ups’, etc., the scope for a more innovative and comprehensive approach to scientific information may well lie in collaboration among learned societies in future.
Learned Society Robustness

In a study conducted by Christine Baldwin in 2004 for ALPSP, 154 learned society and professional association publishers around the world were contacted by questionnaire, and 68 provided responses (Baldwin, 2004). They were almost equally divided between those who did their own publishing and those who contracted their publishing out to a third party; some did both. Not all the learned societies which responded made a surplus from their publishing – approximately one-third said that they did not. Of those self-publishers which did make a surplus, the median surplus was just 15% of overall subscription revenue. Self-publishers also reported that publishing surpluses represented a median 20% of total society revenues; those who contracted out reported a higher median figure, 30%, although somewhat surprisingly, some
were receiving nothing from their publishing partner. The purposes to which the money was applied fell into three distinct groups:
– public education in general
– support for the subject community as a whole (keeping conference fees low, providing bursaries for attendance at the organisation’s own and other meetings, offering research grants)
– support for the society brand and its membership in particular (providing free or reduced-price copies to members, keeping membership dues low, and generally supporting the running costs of the organisation).
While responses about how much of the surplus was applied to different areas of activity may not be comparable, the figures were at least indicative. (By ‘surplus’ is meant the amount of journal subscription and other revenue that, after covering the costs of publishing the journal, the society or association is able to use for its other activities.) The percentage of respondents who applied at least some money to each of the areas was a more meaningful finding. They ranked as follows:
– Subsidy of members’ copies of the journal (96% of respondents did this)
– Supporting the organisation in general (82%; of those who did, a median 60% of surpluses was applied to this)
– Reinvestment in the publishing business in particular (42%; 30%)
– Subsidy of conference fees (33%; 7%)
– Subsidy of membership dues (32%; 15%)
– Provision of bursaries (26%; 7.5%)
– Public education (26%; 7.5%)
– Reinvestment in the organisation’s reserves/endowments (25%; 17.5%)
– Provision of research grants (21%; 25%)
– Other (21%; 25%).
According to Baldwin, if publishing surpluses were to be reduced (for example, by the ongoing process of being ‘squeezed out’ by larger publishers’ ‘Big Deals’, or by a change of business model in response to market pressure), there would be a number of consequences:
– the members themselves would suffer (paying more for membership, more for their copies of the journal)
– meetings and conferences which support the discipline as a whole would suffer (higher prices, fewer bursaries)
– research would suffer (fewer grants)
– societies themselves would suffer (less contribution to administrative costs, less contribution to reserves and endowments for future work)
– society publishing would be badly affected (less reinvestment)
– the public would suffer (less public education, patient support and the like).
Whether relying on libraries’ collection development budgets (in both academia and industry) is the best way of serving these interests is questionable. The survey indicated that learned societies have a role to play which differs from that of the mainstream commercial publishers, in that they often provide an accreditation service for their profession or discipline, monitor practices and impose discipline when required. They would be the natural partners in any attempt to extend the reach of scientific publishing into a broader arena: they combine knowledge of the subject area with a mission to make it more publicly relevant and visible. Using their subject niche as a catalyst for reaching out to the ‘long tail’ within their society membership requires a new and innovative approach. In a survey of Small and Medium-sized Enterprises (SMEs) undertaken by Ware on behalf of the Publishing Research Consortium (Ware, 2009), the author confirmed the role which learned societies could play in helping those working in small innovative organisations gain access to published literature. His view was that:

Many professional bodies [such as learned societies] have libraries or information services that offer access to information for their members. This can be strongly valued by [society] members because it can be both cost effective and highly targeted to their specific information needs. Such libraries report difficulties in expanding their services online [to their members] because [of] budgetary or licensing constraints but if these could be overcome this could offer an attractive option for many professionals.
This is where the disenfranchised or non-affiliated audience comes in: if enough members of the profession are currently unable to gain easy access to research outputs which might help them in their professional practices, then the scope for changing budget constraints seems high. And from the numbers of U.K. society members listed above, there appears to be a strong possibility of the ‘long tail’ being as relevant as the core market in generating revenues. All this presupposes that segmentation of the market between the core and the ‘long tail’ can be achieved, and different pricing algorithms applied, without the core market being cannibalised. These are strategic and visionary issues and should not be clouded by current subscription-based practices geared to an institutionalised library market. Business models are described in chapter 24.
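The segmentation arithmetic behind this argument can be sketched as a toy calculation. All figures below are hypothetical illustrations, except the subscription range of 800–1,000 and the potential audience of roughly 300,000 members quoted earlier in the chapter; the uptake rate and prices are invented purely to show the shape of the comparison.

```python
# Toy model: compare core (institutional subscription) revenue with
# potential 'long tail' (pay-per-item) revenue for one journal.
# All parameter values are hypothetical illustrations.

def segment_revenue(core_subs, sub_price, tail_size, tail_uptake, item_price):
    """Return (core revenue, long-tail revenue) for a simple two-segment market."""
    core = core_subs * sub_price
    long_tail = tail_size * tail_uptake * item_price
    return core, long_tail

core, tail = segment_revenue(
    core_subs=900,      # within the 800-1,000 subscription range quoted above
    sub_price=1_000,    # hypothetical institutional subscription price (GBP)
    tail_size=300_000,  # potential global society membership (rough estimate)
    tail_uptake=0.05,   # hypothetical: 5% of the tail buy individual items
    item_price=20,      # hypothetical pay-per-item price (GBP)
)
print(core, tail)  # 900000 300000.0
```

Even with a modest assumed uptake, the long tail contributes revenue of the same order as the core market, which is the point the text makes; the risk of cannibalising the core depends on how far apart the two prices are kept.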
Libraries within Learned Societies

Do learned societies currently provide wide-scale information support for their (professional) members? According to an unpublished (grey literature) report made available by Cathy Linacre in June 2009, entitled “Survey of library services in U.K. professional bodies” (Linacre, 2009), there is considerable activity among a number of learned society libraries. Linacre visited 15 library services at professional institutions such as the British Medical Association, the Institution of Mechanical Engineers, the Royal Institution of Chartered Surveyors and the Law Society, with memberships ranging from 10,000 to over 100,000 per society. She interviewed each head of service or service manager to establish what information services were available, how they were resourced, and where they expected their services to be in five years’ time. The survey identified that the budget for information provision amounted to around £5 per head of membership per annum (with a range from £3 to £10). Approximately 60% of this budget went on staff costs and the rest on acquisitions, although again the proportions varied between libraries. The libraries had on average nine staff, and performed the same support services (cataloguing, loans, photocopying, enquiries, etc.) found in other research libraries. Linacre also found that the typical library offered a reading room which received annual visits equivalent to 5% of total society membership. There was an online catalogue open to members and non-members alike, though not usually cross-searchable with other external resources. Where statistics were available, annual visits to the online catalogue were equivalent to 43% of the total membership. There was an intention among the libraries to move more fully online by adopting a single sign-on to a greater range of resources.
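The survey figures above imply straightforward budget arithmetic. A minimal sketch, using the ~£5 per head and ~60% staff share reported by Linacre (2009); the 100,000-member society size is simply the upper end of the range she surveyed, not a specific institution:

```python
# Illustrative split of a society library budget into staff and
# acquisitions costs, using the per-head figures from the Linacre survey.

def library_budget(members: int, per_head: float = 5.0, staff_share: float = 0.60):
    """Return (total, staff, acquisitions) budget in GBP per annum."""
    total = members * per_head
    staff = total * staff_share
    acquisitions = total - staff
    return total, staff, acquisitions

total, staff, acq = library_budget(100_000)  # a large professional body
print(f"Total budget: £{total:,.0f}")        # Total budget: £500,000
print(f"Staff costs:  £{staff:,.0f}")        # Staff costs:  £300,000
print(f"Acquisitions: £{acq:,.0f}")          # Acquisitions: £200,000
```

At the £3 lower bound the same 100,000-member society would have only £120,000 a year for acquisitions, which helps explain the licensing difficulties the survey reports.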
Constrained budgets, and difficulties in reaching licensing agreements with publishers, were obstacles to achieving this. As a way forward, developing an appropriate information support service incorporating the full range of digital formats and information services could be a strategically important direction for learned societies to take, given the new potential being opened up. However, this is based on a small sample, and of the larger professional bodies only, which makes it difficult to recommend that all learned societies adopt the same structures. Nevertheless, if the numbers add up – if the sizing of the ‘long tail’ in each instance gives scope for creating hubs or portals (for example) delivering extra value-added services at a viable and acceptable price – then learned societies could perform a valuable social mission by ensuring scientific communication is spread widely within society.
The claim could be made that such professional bodies have not been successful in developing such ventures in the past and therefore would not succeed in future. However, this ignores the changing market, the ‘perfect storm’, and the ‘long tail’ of currently unaffiliated users.
Information Hubs and Portals

In the ARL/Ithaka report on “Current models of digital scholarly communication” (ARL, 2008), alternative information-disseminating options available to scholars were explored (see chapter 17 on Future Communication Trends). The study identified eight alternatives to the book and journal now available to researchers as a result of the Internet and electronic publishing developments. The authors describe one set of ‘alternative’ publishing options to the journal system which fits the mission of learned societies: what they refer to as “hubs.” In effect these are digital portals which offer a range of information services targeted at the needs of society members. They can include access to e-journals, reviews, e-prints, conference papers, grey literature, theses, blogs and/or newsletters – all with the aim of consolidating relevant information targeted at like-minded researchers across the spectrum of the learned society’s information needs. The common factor is that the integrated information package would be relevant to the specific members of a particular learned society. Decisions on what should be included in the hub or portal would lie with a ‘gatekeeper’ within the learned society – a maven fully aware of the issues the society faces in trying to project a modern face to its members. Examples of a ‘one-stop shop’, hub or portal include IBMS BoneKEy, a web portal of the International Bone and Mineral Society; Information for Practice, which is directed towards social work and practice; and the Alzheimer Research Forum. The ARL/Ithaka study suggests that such services could become popular within the scientific/technical/medical community. However, they are costly to create and maintain, and a variety of business models would need to be explored to ensure viability.
Advertising and corporate sponsorship may play a part, but the larger societies may also need to divert some of their membership dues towards creating and maintaining their own membership portals.
The Jisc view

Jisc co-funded a number of studies which reported on the benefits of Open Access to the private sector (including Jisc, 2009a; 2009b). Jisc has an agenda in this respect – it is convinced that Open Access is the solution to the current scientific communication problem and has invested public funds in pursuing this view. The Jisc programme director for digital infrastructure, Neil Jacobs, points to the results of studies such as Houghton’s (2009), which quantified the financial benefits of Open Access, but he also added that there are intangible benefits arising from increased openness in the area of innovation: shorter time to market, improved decision-making, better risk management, new product research and development, and the avoidance of wasted duplication of effort. All of these are difficult to quantify but add to the value which an Open Access system would confer on U.K. society. In addition, Jisc points out that the U.K.’s voluntary and charitable sector is also a valuable asset to the country. There are, according to Jisc, 163,800 general charities employing 765,000 people and 19,800 volunteers. Charities also face barriers in accessing needed research: 80% of the contacts made highlighted the cost barrier, and 46% the lack of time. Content Complete Ltd, now part of Jisc Collections, undertook a study a few years ago for a group of London societies on ways to give members remote e-access to the journals subscribed to by the societies’ libraries. The idea was that a price would be negotiated with publishers allowing society members to access a web page at the society, which would link through to the full-text articles at the publishers’ sites (after authentication). Some large commercial publishers were allegedly sympathetic to the idea.
The Finch Report

The role of learned societies as providers of scientific journal services was referred to in the Finch report (RIN, 2012). Under a section entitled ‘What needs to be done’, it was proposed to ‘Keep under review the position of learned societies that rely on publishing revenues to fund their core activities, the speed with which they can change their publishing business models, and the impact on the services they provide to the U.K. research community’ (RIN, 2012: 8). It could be added that their role should also be considered within a broader context, as supporters of the non-U.K. academic sector in gaining access to scientific literature.
Trends

The increasing reliance by learned societies on the marketing and commercial structures offered by commercial journal publishers may appear to contradict the suggestion that there is a growing role for learned societies, and that they could become more dominant in meeting the needs of researchers. However, the changes taking place at technological and social levels – the ‘perfect storm’ – offer learned societies new opportunities which were denied to them under the restrictive conditions set by commercial publishers, inherited from a printed era and accepted by the institutionalised buying market (research libraries). Only professional societies can assemble comprehensive databases of marketing information on people working in the field, for example – a necessary first step in building any kind of direct marketing service. And only the professional societies have their own field of study as their exclusive focus. Marketing to libraries is a shotgun approach, but digital media targeting the unaffiliated in the professions lends itself to a more efficient rifle-shot strategy. Once the ‘long tail’ of unaffiliated professional workers becomes part of the mainstream publishing market, the economies of scale which have long favoured large international commercial publishers become equally available to learned societies. In many cases, where the ‘long tail’ can be assimilated quickly and easily, targeted information hubs offering a variety of information services can move in to usurp some of the roles currently vested in the STM journal. Who is best able to know what these membership hubs should include? The role of social media in keeping society members up-to-date could be as important as the formal published journal literature. It may take the form of e-magazines, news updates, information about conferences relevant to the scope of the society, etc. It could also include provision of access to the society’s web site for members (and non-members).
Information could be disseminated through popular social media sites such as ResearchGate, LinkedIn, Twitter, even Facebook. These may face less of a cultural barrier as scientific information disseminators than they do in the core research sectors within academia, particularly as the professional audience would benefit most from the ‘time switch’ which the revolution in mobile phone technology has created, enabling members of the profession to choose the time and place for searching items of interest which best suit their particular career, business or personal circumstances. A portal or hub approach may be the optimal way forward for learned societies given their USPs (unique selling propositions). This could be supplemented by greater customisation and personalisation of information based on SDI/RSS and alerting concepts. So far learned societies have suffered under the traditional print paradigm from being too small and parochial – they lacked the resources to provide a global sales outreach for their publications, hampered by a lack of professional staff in areas such as IT and marketing. Commercial publishers were able to use their muscle in these areas to take over the publishing programmes of small, specialised societies. In the Internet era, the competitive advantages which large commercial publishers enjoyed are reduced – economies of scale are no longer so critical in areas such as sales, marketing, new product development and technology. This no longer puts learned societies at a disadvantage against their larger commercial cousins in establishing and maintaining effective STM information programmes – the advantage of scale is diminished. Where the focus on learned societies becomes important is in the strength of their corporate mission and unique branding. Unlike commercial publishers, learned societies plough net income and profits back into the society, to improve the welfare of members in a variety of areas; commercial publishing diverts profits away from the science budget into stockholders’ and venture capitalists’ pockets. The current financial problems facing the scientific communication industry may present some unexpected opportunities for learned societies. Whilst the reduction in spending power by libraries creates tension:

…this may be the time for developing renewed membership benefits [by learned societies], including new publications and publishing services that are offered exclusively to members.
In some respects, societies may wish to look beyond institutional markets and concentrate instead on marketing directly to researchers in the field, a library bypass strategy (Presentation by Joe Esposito to Oxford University Press Delegates in New York on July 12, 2012.)
An added element in this dynamic is the emergence of Open Access as a business model supported by major research funding agencies through mandates imposed on researchers and authors over where their research output should be made available. Open Access offers support not only for members of the professional society, who may already have access to the society’s publications through membership, but also for non-members, including people working in adjacent fields and undertaking cross-disciplinary work. However, it exacts a financial toll on society publishing programmes, as they would be unable to recoup their investment in their publication services. Undermining learned societies is not the aim of the funding agencies; it could, however, be an unintended consequence. So the context we work in today has the professional society surrounded on at least three sides: by libraries, funding organizations, and commercial publishers. The challenge is to find a strategy that reasserts the role of professionals working together on common topics:

This marks the real end of the traditional toll-based publishing paradigm – not Open Access, whose benefits are largely to people outside the field, but rather the emergence of direct-to-consumer marketing, which serves to strengthen ties among societies and their membership and places the prerogatives of the researchers in a particular discipline above all others (Esposito, 2012a).
Implications for UKWs

Professional learned societies could galvanise a new approach to scientific publishing for unaffiliated knowledge workers (UKWs). They could provide the structure which would give credibility to a new approach to STM information. This approach would be focused on learned societies in the U.K. It would embrace a wider range of information resources within a portal or hub, rather than relying solely on revenues from the society’s learned journal. It would include the emerging information services and interactive communication services flourishing in social media. It would be a multimedia approach, and could be as much data-driven as textual in content. Credibility would lie in the learned society’s mission to support the needs of a targeted community rather than the private investment sector (which sustains the main commercial journal publishers). The programme would need a coordinator or information moderator to ensure that quality, relevant information from all sources is included within the portal. Dissemination would be through matching society members’ profiles of interest against incoming new information, reducing the ‘white noise’ of Internet-generated information as much as possible and cutting the time wasted by individuals who would otherwise have to sort the wheat from the chaff. Modern delivery options, such as updates to mobile devices, could also become a growing feature of a modern learned society publication system. This approach challenges early conceptions of appropriate business models, and new, more granular approaches are emerging. No longer is the journal the key. Instead, other media, including the research article where
appropriate, and its supporting raw data, or even a particular information 'nugget' within an article, become valuable constituents of a broader information service. Combinations of the above programmes would enable the 'long tail' of UKWs with professional credentials to be brought within the STM system. The following chapter looks at the role of business models in general and how they have conditioned the market for STM material in recent decades. Following on from this overview, Open Access, as a specific and important business model, will be described, since it could be another principal tool for enabling UKWs and their professional learned societies to be embraced within the STM research system.
Chapter 24
Business Models

Introduction

Business models, notably pricing models, constitute a major barrier to knowledge workers in general trying to access published scientific information. Much depends on whether the business model acts to protect traditional stakeholders' revenue streams, or whether a more open approach to price setting is adopted in which costs are lost somewhere deep in the science research system. Irrespective of the approach adopted, there is a cost in creating and monitoring scientific information. There is no such thing as a 'free lunch,' even though some Open Access and 'freemium' advocates might claim that there is. Business models are set differently by the main players in STM. This chapter looks at the key business and pricing strategies currently in operation.
Publisher Initiated Business Models

Serial Subscription and Licensing Model

The business model for publication of research articles and conference proceedings has evolved over three and a half centuries. The journal became the basic pricing unit. Packaging related articles into a single journal meant that many topic-based journals grew in size over the years in parallel with the annual growth of scientific literature in the subject area, estimated at 3–3.5% per annum overall for STM information (Ware & Mabe, 2009a; 2012). As a result, popular journals saw their subscription prices increase rapidly, at rates which some parts of the market felt were unsustainable. Personal subscribers were early sufferers: as individuals could no longer afford the subscription, scientific journals became essentially an institutional sale (Tenopir & King, 2000). This has been described in detail in an earlier chapter on the 'serials crisis' (chapter 7). As a sustainable business model, journal subscriptions have come to depend on there being a robust and growing library market, in tandem with the growth of research output. During the halcyon days when science funding was plentiful, in the 1960s and early 1970s, this dependency was not a significant weakness. Since then, however, academic library budgets have failed to maintain their share of total university budgets: in the U.K., the library's share of the total budget fell from 3.8% in 1978 to 3.4% in 2008.
The growing output of scientific material in the form of research publications, set against a static or in some cases declining institutional library budget or market size, meant that prices began to rise faster than other industry price indices – not only to cope with the additional output but also to compensate for declining sales. A price spiral developed. Data provided by the Association of Research Libraries and the National Science Foundation have highlighted the increasing gap between available library budgets for serials and the prices being set (see chapter 7). In 2013, four academics from the University of Leicester's School of Management tried to bring the debate about the steadily increasing prices of publications into a journal published by Taylor and Francis, only to have the article concerned censored by the publisher. This led to the editorial board threatening to resign unless the article was reinstated. The debate was to appear in the journal Prometheus: Critical Studies in Innovation. Its thrust – 'Publisher, be damned! from price gouging to the open road' – criticised the large profits made by commercial publishers on the back of academics' labours, and the failure of the Finch report on Open Access to address them adequately (see chapters 25 and 26). The article compared publishers with the music industry, noting that the latter saw surging sales once it began reducing its prices. It also suggested that less strenuous infringement countermeasures could help push prices down. Most damningly, it examined the rates charged by publishers, comparing those of for-profit entities with those of non-profits. Bergstrom and Bergstrom (2004) had suggested that a journal page published by a for-profit publisher is between three and five times more expensive than one published by a not-for-profit publisher.
Widely-cited journals are perceived to be of higher quality, which allows for-profit publishers to charge higher prices for them. If widely-cited and more highly-priced journals also enjoy higher circulation (because they are widely cited), then publishers also benefit through lower average production costs (McCabe, 2004; Dewatripont et al., 2006). The more widely cited a journal is, the more likely it is to be published on a larger scale (reducing unit costs), and the more desirable it is as a destination for submissions (lowering costs yet again). With this desirability comes the opportunity to increase prices. The result has been the 'serials crisis'.
‘Big Deals’ Led by pioneering work from Academic Press in the U.K., supported by HEFCE’s then director of policy (Dr. Bahram Bekhradnia), a solution to the serials crisis began to emerge in the late 1990s through the provision of ‘Big Deals’ by leading publishers. For a price similar to the investment in a few of a publisher’s titles,
an agreement was reached to deliver a much wider package, in some cases covering all the publisher's titles in all its subject areas. These 'Big Deals' helped reduce the impact of individual journal price increases by spreading the library's budget over a greater amount of published material. For librarians it appeared they were getting more 'byte' for their buck, even if more 'noise' was creeping into their collection of journal titles. More recently the 'Big Deals' have come under scrutiny insofar as they take much of the collection development expertise out of the hands of professionals within libraries. To quote a listserv posting from the late Fred Friend (an independent consultant formerly attached to Jisc and UCL): "it soon became clear that the sirens [in favour of 'Big Deals'] were leading libraries onto the rocks of distorted collection policies." Librarians were no longer in control of the collection development most appropriate for their local users. Scientific publishers have been accused of using bundling deals to sell subscriptions to smaller or poor-quality journals that might otherwise be forced to close. However, publishers have defended 'Big Deals' on the basis that they led to easier access to a greater amount of material. Derk Haank, Chief Executive Officer of Springer Science+Business Media, was quoted in Information Today (January 2011) as follows: "I don't believe there is a structural problem, and things will not fall apart. There are always countervailing forces. I don't believe that our pricing is a big problem, and I am sure that this market can carry on indefinitely" (Haank, 2011).
However, this has not stopped financial analysts (such as Claudio Aspesi of Sanford Bernstein) from claiming that, whilst the 'Big Deal' may have worked well as a solution for over a decade, we can expect research libraries to start cancelling their contracts – a development that will "lead to revenue and earnings decline" for publishers (Poynder, 2011a). This has been echoed by Rick Anderson, Dean of Libraries at the University of Utah in the U.S.A. (Anderson, 2013b). There is also a more insidious side to 'Big Deals.' Because publishers insist that their library customers do not divulge the financial details of their 'Big Deal' arrangements, similar libraries are often paying vastly different amounts for the same package of journals. "[S]ome universities are paying nearly twice what universities of seemingly similar size and research output pay for access to the very same journals" (Bohannon, 2014). Publishers require that institutions sign non-disclosure agreements, "partly to limit the bargaining power of buyers and partly to hide the results of this unequal bargaining power," according to Peter Suber, Director of the Office for Scholarly Communication at Harvard University. To unearth some of the comparative 'Big Deal' terms, a team of U.S. economists contacted university librarians across the U.S.A.; half of them willingly shared information about their bundled subscriptions. To get information on the other half required Freedom of Information Act (FOIA) requests for copies of
journal contracts with state-funded institutions. The process did not go unchallenged: Elsevier unsuccessfully sued Washington State University in Pullman, claiming that such deals are "trade secrets," in the hope of blocking the release of subscription information. In the end, the researchers gathered price data across institutions for 2,009 deals with Elsevier, Springer, Wiley, and other publishers, and found some inconsistencies. The similarly sized University of Wisconsin, Madison, and University of Michigan, Ann Arbor, for example, paid Elsevier $1.22 million and $2.16 million, respectively, for essentially the same bundle of journals. The University of Texas, Austin, paid $482,000 for Springer journals, while the University of Miami in Florida, which has far fewer PhD students, paid more ($554,000). The smaller University of Oklahoma paid more than twice as much ($500,700) as the University of Missouri, Columbia ($233,700), for access to Wiley journals. "We realise that a simple linear equation involving enrolment and PhDs does not fully explain the value of journals to universities," explained Theodore Bergstrom, co-author of the study and an economist at the University of California, Santa Barbara. But price transparency is "just one step that has to be taken if there is to be any chance of revolutionary change" regarding journal prices. A more generic issue is that the 'Big Deal' has become a 'big villain'. There have been exchanges on listservs about the impact of journal prices and 'Big Deals' (February 2011). One in particular highlights the issues facing librarians. Diane Grover, Electronic Resources Coordinator at the University of Washington Libraries, posted on [email protected] on 4 February 2011:

The big deals have been priced based on historical subscriptions. If we still were able to maintain our own subscription lists, we would have been cancelling journals that are included in bundles for the last several years.
But the bundle deals insist on zero to minimal cancellations, percentage increases, and even require adding titles the publishers start up or take over from others. If costs go up every year, and our revenue goes down, there is clearly a sustainability problem. Over the last couple of years, we have discontinued one big deal, and several smaller ones. We have not joined any new ones. We will very likely have to leave other big deals going into the next fiscal year. Maintaining the remaining bundles has cost our book budget, databases, and independent journals dearly. The remaining large bundles account for an ever increasing portion of our budget. Ebook bundle deals from publishers seem completely out of reach.
The subscription and licensing model for print and electronic serials is intricately linked to the health of the research library budget. If the library budget gets too far out of step with the output and pricing strategies adopted by journal publishers, then the longer-term sustainability of subscriptions as a business model comes into question.
Usage-Based Pricing

A suggestion proposed on the listservs in January 2012 by one publisher, Multi-Science Publishing, is for a publisher to set an annual fee, say of $1,500, against which a library would be able to download as many of the publisher's articles as it wished at a per-download cost of, say, $5 each. This would apply not only to the publisher's current subscriptions but also to its digital archives. At year end, if the library had made more than 300 downloads, the publisher would invoice the library for the balance. To ensure that the library is protected from unlimited liability, the publisher would set a cap – the maximum the publisher could charge regardless of the number of downloads: $10,000 for a major institution, $3,500 for a smaller one. To further eliminate uncertainty, agreements could be for three years, with fixed price increases. Three years' worth of data would then give a basis for renewing, renegotiating, or cancelling the contract. The main point is that, through this approach, industry thinking steadily moves towards a world where payment is for usage only.
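The settlement arithmetic of this proposal can be sketched in a few lines of illustrative Python; the function name and parameter defaults are invented here, using only the figures quoted above ($1,500 annual fee, $5 per download, and the two caps):

```python
def year_end_invoice(downloads, annual_fee=1500, per_download=5, cap=10000):
    """Year-end settlement under the proposed usage-based model.

    The annual fee covers annual_fee / per_download downloads (300 at the
    quoted figures); beyond that the balance is invoiced, up to a cap that
    protects the library from unlimited liability.
    """
    usage_charge = downloads * per_download
    total = max(annual_fee, usage_charge)  # the annual fee is a minimum payment
    return min(total, cap)                 # the cap is a maximum payment

# With the figures quoted in the text:
year_end_invoice(200)   # under 300 downloads: just the $1,500 fee
year_end_invoice(400)   # 400 x $5 = $2,000, so $500 is invoiced as the balance
year_end_invoice(5000)  # usage would be $25,000, capped at $10,000
```

The smaller institution's terms are obtained simply by lowering the cap, e.g. `year_end_invoice(3000, cap=3500)`.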
Online Document Purchase (from the Publisher)

As part of the journal subscription or licence agreement with the user, it is possible to download articles directly from the publisher's web site rather than having to wait for the printed subscription to arrive on the desk. This has resulted in a substantial growth in downloads of individual articles – estimated at over one billion per year from the main STM publisher sites (see Tab. 24.1).

Tab. 24.1: The main publishers/suppliers of online documents. The publishers covered are Elsevier, Springer S&BM, Wiley/Blackwell, BMJ, Institute of Physics Publishing, Nature Publishing Group, Oxford University Press, Portland Press, and SAGE. [The per-publisher download figures have not survived reproduction.] (Source: Author's estimates based on publisher-disclosed data and from Annual Reports)
In addition there are organisations which provide service support for smaller publishers – they also deliver document downloads on behalf of their publisher clients. These aggregators include Publishing Technology Ltd (formerly Ingenta), the combined Atypon and EBSCO's MetaPress, HighWire Press, etc. There is a flood of individual electronic articles in circulation, most subject to an agreement whereby limited use can be made of the article online – confined primarily to the authorised end user, and subject to the strict terms of the licensing agreement. The so-called 'Article Economy,' which was in vogue a decade ago (Brown, 2003), focused on the purchase of such individual articles. It was a move away from the pre-packaged issue and volume format towards buying an individual article on demand to meet a particular topical need. It never really matured as a product/service concept. Anecdotal information from larger publishers indicated that their document delivery business generated at best 1–2% of their total revenues, a percentage that remained stable during the past decade for the largest STM publisher (source: personal communication with Karen Hunter, former vice-president, Elsevier Science). There is an understandable wariness among publishers about selling individual articles, as this could undermine the sale of profitable subscriptions and licences. Active promotion of the online article delivery model has therefore not been a top priority for publishers. Instead they have used high individual article pricing as a means to protect the journal subscription. Several publishers charge in excess of $30 for an article, without any guarantee that the article concerned is directly relevant to the user. In these circumstances it is perhaps unsurprising that the rate of uptake has been low.
Although the purchase of individual articles from the publisher’s web site remains a small activity in comparison with the revenues received from the journal programme, a recent informal study has shown that there is, nevertheless, significant activity on a publisher’s web site. According to data provided by
Tab. 24.2: Individual article purchases from a publisher web site. The metrics reported were: traffic per year (million visitors); non-institutional traffic per year (million visitors); document delivery sales per year ($ million); average article price ($); number of docdel transactions; and the docdel conversion rate of non-institutional traffic (%, transactions per million visitors). [The numeric values have not survived reproduction.] (Source: Correspondence with William Park, CEO, DeepDyve, September 2010)
DeepDyve (see Tab. 24.2 above), considerable use is being made of an unnamed but nevertheless representative publisher's STM platform. The Chief Executive Officer of DeepDyve, William Park, also claimed that in his discussions with publishers the estimated proportion of visitors to their sites who were 'non-institutional' ranged from 35–60% (Park, 2009). Many of these could be counted as UKWs. There is currently limited evidence to determine whether individual article sales can be supported by a sustainable business model. There are few systematic price assessments to show whether the demand for individual article purchase is price elastic or inelastic. It is unclear at what price point a large section of the 'long tail' of end users might buy into individual article purchases – the 'Tipping Point' (Gladwell, 2000). The shape of the demand curve still needs to be identified, for different disciplines, for individual article sales. This research is essential to determine how substantial the currently latent UKW market could become if suitable commercial options and business models were introduced.
‘Turnaways’ Publishers are confronted with a large number of online enquiries for articles which for one reason or another they have ‘turned away.’ This may be because the end user is not authorised or authenticated to access the article concerned, or it may be a technical reason, or because the siloed approach to the publisher’s online servers does not entitle the publisher to deliver an online article from another publisher. In many instances it is indicative of a ‘promiscuous’’ flitting behaviour by online users who are not committed end users (Rowlands et al., 2011). There are many reasons why ‘turnaways’ are occurring. Data provided by another leading publisher (whose confidentiality is also protected) gave a picture of their turnaway traffic for 2010 (January to June) and this is outlined in Tab. 24.3. One recent and small-scale (and unpublished) study was undertaken into a group of publishers who provided data on the turnaways from their web sites. From the CIBER results it appeared that as many as half of all attempts to access full-text content are turned away. Full-text accounted for only 4% of all pageviews, compared with 60% for abstracts, 25% for journal home pages, with table of contents (TOCs) and RSS making up the rest.
Tab. .: ‘Turnaway’ of users on one publisher web site, J-J During the 6 month period, there were 14,082,824 ‘turnaways’ of end users from the publisher’s site 84% of these turnaways were because they did not have any subscription entitlement to access to the content. 8% was because entitlement to access had expired. 7% tried to view archived content when they only had entitlement to access to current content. 1% tried to access content that was new, when they only had entitlement to access to archives Less than 1% was due to access not yet being permitted (they had paid for a subscription for the following year, not the current year). A small percentage of the turnaways were because the maximum concurrent cookies being reached or maximum concurrent users being reached (i.e. the total number of PCs that could view content on the subscription was reached so no new PCs could view it, or the total number of users that could view content simultaneously was reached, so another user couldn’t view it at the same time). Source: Personal Communication, anonymous publisher, December 2010
It has not been possible to explore the structure and significance of ‘turnaways’ on an industry-wide basis, although initial exploration has detected that some publishers are beginning to see ‘turnaways’ as indicative of a wider audience beyond their current captive markets. More evidence is required before much can be claimed about the relevance of turnaway traffic as a means of identifying ‘unaffiliated knowledge workers’. A sophisticated, industry-wide analysis of the nature of turnaways from publisher web sites could provide valuable evidence of the nature of possible new markets for scientific documents. However, this requires agreement with many publishers to allow an independent organisation to collect their log data, and a technical procedure to be put in place to analyse the logs on the lines of the small study undertaken by CIBER.
Library Initiated Business Models

Interlibrary Loans

The extent of interlibrary loan provision in the STM sector has been minimal. Not only do administrative procedures build in delays, and not only must staff be tasked with providing a service which brings no direct benefit to the lending library, but the slow turnaround in ILL order fulfilment can be exasperating for a scientific audience used to quick and easy access to required publications. There is a distinction between those large research libraries which have an extensive collection, particularly those libraries which are recognised for their
excellence in particular subject areas, and the many smaller libraries which rely on larger libraries as the ultimate backup for publication access. There is a community-minded approach by the large lending libraries in this instance, even though servicing ILLs may impose pressures on their operational budgets. In these circumstances a charge is levied for meeting an interlibrary loan request, though the amounts involved make little impact on the library's overall revenues. The volume of STM interlibrary loans of journal articles is minimal. Some librarians have also implemented reciprocal borrowing arrangements between university libraries. The largest such reciprocal borrowing scheme, SCONUL Access,¹ covers most academic libraries in the U.K. and Ireland (SCONUL, 2013). Interlibrary loans use physical copies as the medium, and mail/courier services for delivery. They do not cover digital copies of STM publications – these are treated under the terms of licence conditions set by the publisher (see above under subscriptions/licensing) or under copyright set by the author/publisher, which may or may not allow for free access.
Document Delivery

The ubiquity of easily transmitted digital content has taken document delivery from a high-touch, labour-intensive service to a quick-turnaround activity. It has been a feature of the science communication landscape since the early 1960s, when the National Lending Library for Science and Technology was set up as a dedicated document delivery centre at Boston Spa in Yorkshire by Donald Urquhart. It was subsequently brought within the British Library (1973). Since then it has seen its operations increasingly dictated by the commercial interests of publishers as much as by the needs of end users for individual articles (Appleyard, 2010). Currently there are a number of key players in the document delivery business. These include the following:
– British Library Document Supply
– Copyright Clearance Center, Inc. (CCC)
– DeepDyve, Inc.
– FIZ Karlsruhe
– Information Express, Inc.
– Infotrieve, Inc.
– Linda Hall Library
– OCLC – Online Computer Library Center, Inc.
– ProQuest LLC
1 SCONUL Access. Available at: http://www.sconul.ac.uk/sconul-access
– Reprints Desk, Inc.
– Research Investment, Inc.
– Document Engineering Co
– Access Information
– Document Centre
– Infocus Research
– Subito
– Swets
The total annual turnover of the above document delivery centres is estimated at $144 million. This compares with annual revenues from the sale of journal subscriptions and licences of $9,400 million (Outsell, 2008). Document delivery remains in a time warp, never successfully challenging the journal subscription, nor effectively serving the interests of UKWs. These services remain on the periphery of traditional STM publishing.
Walk-in Access

The number of higher education institutions (HEIs) offering non-library members access to their electronic resources has grown considerably in recent years. These 'walk-in access' services permit non-registered users to look at digital content, but only when the terms of the institution's licence agreements allow. As mentioned in chapter 9, a more lenient system often operates for printed publications: research libraries allow visitors to access their collections on a reference basis, whether they are staff or students from other universities or members of the general public. Despite librarians' efforts to open up access to online material, as distinct from print, it is often subject to stricter licensing conditions. It is possible that Open Access publishing may solve this problem, but at present there is no critical mass of free digital articles which can be offered to walk-in users. Walk-in access has been provided in Wales since 2011 through schemes such as WHELF, and U.K. public libraries are also experimenting with a walk-in access scheme. These have been described earlier.
National Licensing Models

A national site licence is a business model which extends the geographical coverage of a licence from a restricted local site to a broader regional, or in some cases national, community. As such it eliminates those barriers that prevent
end users outside a subscribing site from getting access on the basis that they are not authorised or authenticated users. It makes the published information open and free to all end users within the geographical area. There are some examples of this, though in most cases there have been difficulties in putting them in place and making them viable. The classic case is Iceland, where all 320,000 residents are able to access published information. However, there is little information available about the impact the national site licence has had on Iceland's scientific performance since it was implemented. Other countries are looking at aspects of national site licensing, with a more robust approach being adopted by Jisc in the U.K.

NESLi2

NESLi2 is Jisc's national initiative for licensing online journals on behalf of the higher and further education and national research institutions in the U.K. In 1997 Jisc launched the NESLi SMP initiative, a programme which extended the NESLi initiative to include online journals from small and medium sized publishers. NESLi2 employs a specialist team of negotiators to manage aspects of the NESLi2 offers, and acquired the company Content Complete Ltd in 2009 to assist in negotiations with publishers. The content from 17 leading scientific publishers is covered by NESLi2 agreements, which typically span one to three years and make over 7,000 online journals available to authorised users. Financial savings on the content purchased amounted to £13.5 million in 2010, and it is estimated that NESLi2 has saved the community over £40 million since its inception in 2004. The content itself is made accessible directly from publishers' bespoke web platforms.
According to Jisc, the key benefits of the NESLi2 initiative are:
– a clearly defined list of publishers, based on feedback from the community, ensuring that journal titles are selected in response to institutional demand
– use of the Model NESLi2 Licence for Journals, which allows staff and students to make use of the online journals included in the offers
– experienced licensing staff who negotiate the most favourable pricing, terms and conditions possible
– pre-defined criteria to assist the negotiation process
– simplified administration for participating publishers and institutions.

SHEDL (Scottish Higher Education Digital Library)

The closest to a national agreement in the UK is SHEDL, the Scottish Higher Education Digital Library,² as it has deals whereby all members have access to all titles of a publisher – and online-only. Facilitated by Jisc Collections and led by the
2 Elsevier Postdoc Free Access Program, available at: http://www.elsevier.com/postdocfreeaccess
326 Chapter 24
Business Models
Scottish Confederation of University and Research Libraries (SCURL), it has provided the nineteen Scottish Higher Education Institutions with access to over 1,850 online journals from publishers such as the American Chemical Society (ACS), Berg, Cambridge University Press (CUP), Edinburgh University Press (EUP), Intellect, Oxford University Press (OUP), Project Muse, and Springer, as well as Portico. Following its launch in January 2009, SHEDL has made an impact in eliminating differential access to journal content across the sector. A small Scottish university which in the past subscribed to only a few titles from a leading publisher now has access to all of that publisher's titles, i.e. access to many hundreds of journals. This covers only universities and colleges, although the original idea was also to include NHS libraries, and this may happen at some point. In addition, RIN funded a study on SHEDL – "One year on: Evaluating the initial impact of the Scottish Higher Education Digital Library (SHEDL)" – which was published in October 2010 and undertaken by John and Laura Cox (Cox & Cox, 2010). The authors highlighted that usage of online journals within SHEDL grew faster than the UK average, particularly among those SHEDL institutions with no prior use of NESLi2.
Other Consortial, Regional, and National Site Licences

Similar schemes in other countries include:
– Canada. The Canadian Research Knowledge Network (CRKN) has negotiated 'national' agreements, but only for universities. CRKN is a partnership of Canadian universities; it is governed by a national Board of Directors and administers an annual budget approaching $100 million. It has recently undertaken a major ($47 million over three years) content expansion in the human and social sciences.
– Ireland. IReL, the Irish Research eLibrary, is a nationally funded electronic research library, initially conceived in mid-2004 to support researchers in biotechnology and information technology and, following the success of this, expanded in 2006 to support research in the humanities and social sciences. In 2009 additional funds were made available for the Royal College of Surgeons in Ireland (RCSI) and the Institutes of Technology.
– Portugal. Portugal has a consortium which was negotiating 'Big Deals' and seemed to be moving towards making them 'national'.
– Brazil. The Portal Periodicos Capes is the Brazilian national electronic library consortium for science and technology. The programme is maintained by Capes, a public foundation attached to the Ministry of Education
with the mission to promote the development of graduate and research programs in Brazil. More than 14 million staff and students in 152 institutions have free access to the full text of more than 9,000 journals. It had an operating budget of US$30 million in 2005.
– Sweden. The BIBSAM consortium is operated by Kungliga biblioteket, the National Library of Sweden. It has 63 active member institutions including universities, university colleges, and government-funded research institutions. The National Library manages about 25 licences, including 'Big Deal' agreements with scientific journal publishers as well as agreements for abstract and indexing databases, encyclopaedias, and digital maps. A network of National Expert Libraries supported by the National Library negotiates additional agreements for subject-specific resources on behalf of the BIBSAM members.
SCOAP³ and the Physics Community
A novel licensing approach to providing easy access to published articles is being developed for the high energy physics community. SCOAP³ (Sponsoring Consortium for Open Access Publishing in Particle Physics) is an international consortium which facilitates open access publishing in HEP (High Energy Physics) by re-directing subscription money from libraries (Anderson, I. 2008). The aim is for funding bodies (through libraries) to buy journal subscriptions from selected publishers to support the peer-review service and allow patrons to read articles without further charge. The SCOAP³ consortium has now established partnerships in some 25 countries in Africa, Europe, North America, Australasia and the Middle East. So far, these partners have collectively pledged over £5 million per annum towards this Open Access initiative to convert the entire literature of the field of high energy physics to free access. As of January 2013 some 71% of the envisaged SCOAP³ budget had been pledged. As soon as further partnerships are built in Asia and South America, and more U.S. libraries of all sizes give their support, the SCOAP³ international initiative will be ready to move to its next steps: the establishment of an international governance structure and a call for tender for leading publishers in the field of high energy physics to convert their journals to Open Access in a way transparent to authors. Each SCOAP³ partner will finance its contribution by cancelling journal subscriptions. Each country will contribute according to its share of HEP publishing. The transition to OA will be facilitated by the fact that the largest share of HEP
articles are published in just six peer-reviewed journals. The SCOAP³ model is open to any present or future high-quality HEP journal. In the meantime, several publishers have offered some or all of their HEP content for free under an open access model in anticipation of SCOAP³ being fully funded. The example of SCOAP³ could be followed by other fields directly related to HEP, such as nuclear physics or astro-particle physics, which are similarly compact and organised, with a manageable number of established journals.
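The pro-rata funding principle described above – each SCOAP³ partner country contributes in proportion to its share of HEP publishing – can be sketched as a simple calculation. The country names, article counts and the notional £5 million budget below are illustrative assumptions, not SCOAP³'s actual figures:

```python
# Illustrative sketch of SCOAP3's pro-rata funding principle: each
# partner country contributes to a common budget in proportion to
# its national share of HEP articles published.
# All figures below are invented for illustration.

def scoap3_contributions(total_budget, article_counts):
    """Split a common budget across countries by publishing share."""
    total_articles = sum(article_counts.values())
    return {
        country: round(total_budget * count / total_articles, 2)
        for country, count in article_counts.items()
    }

# Hypothetical annual HEP article counts per partner country,
# split over a notional 5 million budget
counts = {"Germany": 1200, "Italy": 900, "UK": 600, "France": 300}
shares = scoap3_contributions(5_000_000, counts)
print(shares)  # Germany, with 40% of the articles, pays 40% of the budget
```

The design point is that no partner pays more than its community publishes: cancelling subscriptions and redirecting the money pro-rata leaves each country's total outlay roughly unchanged.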
Author/End User Initiated Business Models
Social Networking and Social Media
Social networking, social media and social tools rely on Web-based technology to turn discrete, brief user contributions into an activity stream, and harness a network to inform the community about issues and developments of which they may be unaware. Whilst social media may have their downsides, there is value in social collaboration for researchers: “One of the most important things that researchers do is to find, use and disseminate information, and social media offer a range of tools which facilitate these activities” (RIN, 2011). Social networking and social media are not so much structured business models as a process which has its roots in the openness, collaboration and transparency which are features of the Web and the Internet. Social networking services such as Twitter, Facebook, LinkedIn, Mendeley, listservs and blogs are beginning to impact on the scientific communication process. There are many other specific social tools which address the three aspects of scientific communication – Communication, Collaboration and Multimedia. In an article published in Nature (January 2011) entitled “Trial by Twitter”, the subtitle ran as follows: “Blogs and tweets are ripping papers apart within days of publication, leaving researchers unsure how to react” (Mandavilli, 2011). The driver behind such activity is the speed with which sloppy work can be weeded out before it becomes part of the established minutes of science. As the article points out, there is the need “To bring some order to this chaos, it looks as though a new set of cultural norms will be needed, along with an online infrastructure to support them.” Such changes draw new business models along with them, business models which are firmly rooted in the prevailing open information initiatives. As such, these new models could embrace the needs of a wider knowledge worker sector.
There is now widespread open discourse among those outside the academic/research sector. From the over 1.4 billion participants on Facebook to the 50.6 million MySpace unique users; from the professionally and academically orientated LinkedIn service with 364 million users to Twitter with 302 million users – there is a great proliferation of social media catering for all online tastes and needs in society. Open discourse has also been the norm in some areas of physics and mathematics, with arXiv.org being the central focus of such interaction. Other subject areas have been less active in making pre-publication results openly available. Key barriers to more widespread adoption of social media discourse in the sciences are that they are fragmented and unstructured, full of potential noise, and potentially too personal in the assessments they generate. “One solution could be in new ways of capturing, organizing and measuring these scattered inputs, so that they end up making a coherent contribution to science instead of just fading back into the blogosphere” (Mandavilli, 2011). The social networking movement relies heavily on the concept of the ‘wisdom of the crowd,’ which claims that the wider the circle of commentators on certain types of issue, the better the results – better even than the decisions made by a small group of experts. The ‘wisdom of the crowd’ is a concept which challenges the current blind refereeing system in scientific communication. Fundamentally, the business strategy that could underpin a structuring of the blogosphere for science is not yet in place. There may be activity, and there may be a ‘perfect storm’ building which puts pressure on the new wave of researchers to adopt more Web 2.0-like services; however, the models supporting such changes do not currently exist. Funding would be required from somewhere to provide an infrastructure with which scientists would feel comfortable, enabling scientific research to be built on solid foundations. There are few indicators of where such funding would come from, although advertising and author payments may be among the possible options.
ArXiv
ArXiv was first launched at Los Alamos National Laboratory (LANL) in the early 1990s, building on the tradition among researchers, particularly in high energy physics, of freely disseminating the preprints of their research articles within the global research community. It was an unstructured process until Dr Paul Ginsparg provided a central facility on his computer to collect electronic versions of preprints and make them available quickly, and without charge, to anyone. It has since become the key information resource among physicists, having expanded from high energy physics into other physics research areas, astronomy, mathematics, computer science, quantitative biology and statistics.
The service physically migrated from LANL to Cornell University Library in 2001. ArXiv has a complex and evolving operation with an annual budget of approximately $800,000. The goal for ensuring the long-term stability (and growth and innovation) of arXiv is building a diverse financial portfolio that combines contributions from libraries, research centres, foundations, and initiatives such as SCOAP³ – blended with endowment income where possible. Stewardship of open access academic resources such as arXiv involves not only covering the operational costs but also continuing to enhance their value, based on the needs of the user community and the evolving patterns and modes of scientific communication. According to Dr Oya Y Rieger, Associate University Librarian at Cornell, in a listserv contribution of August 25th 2011, open access services such as arXiv must have clearly defined mandates and associated governance structures to reflect a commitment to the long-term stewardship of a service. Establishing a transparent and participatory governance structure will be a critical factor in generating institutional fees as well as in formulating a diverse financial strategy. ArXiv administrators have recently reviewed a range of potential status options to establish a community-based support and governance structure and appropriate procedures for strategic, operational, and fiscal oversight.
Endowments
If arXiv were to rely on endowment payouts, it would require an endowment of $10+ million. This would be difficult to achieve in the current financial climate. Nevertheless, Cornell University is an institution which has considered endowments as an option to provide support for scientific communication projects. In their case, an endowment would provide support for arXiv operations, although not as the sole source of income. Edward Zalta and Uri Nodelman eloquently describe both the potential and the limitations of the endowments approach in their publication (Zalta & Nodelman, 2010).
Submission Fees
One further option is to charge a modest fee, say $50, for researchers to upload articles to the arXiv service. This fee could be much lower than that charged by other open access publishers (see chapter 25 on Open Access), such as PLoS or Sage Open, because there would be no editorial review programme to reject many articles. The service would still offer open access. One pundit, Joseph Esposito, attempted a rough calculation based on arXiv's activities several years ago. His assumption was that the number of articles submitted to arXiv would fall by 10% and that the resultant annual revenue would be $2.5 million. With revenues of such a magnitude, arXiv could expand further.
It would also free up funds at charities, philanthropies and universities that were asked to support arXiv, money that they could put to use elsewhere. Incidentally, moving to this plan would significantly reduce the overhead, paid or otherwise, of managing arXiv. “No need for all those governance committees, all that time that would have to go into fairness issues. A submission payment system can be completely automated; bits are free. Committees, on the other hand, are not free and have a way of spawning even more committees.” (Esposito, 2013).
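Esposito's back-of-envelope arithmetic can be reproduced with a short calculation. His original inputs are not given in the text, so the submission volume below is an assumption chosen to be consistent with the $50 fee and his quoted $2.5 million:

```python
# Back-of-envelope arithmetic in the spirit of Esposito's estimate.
# Assumed inputs (not from the original source): a $50 submission
# fee and roughly 55,000 annual submissions before the fee.

fee = 50                      # dollars per submitted article
submissions_before = 55_000   # assumed annual submissions today
drop = 0.10                   # Esposito's assumed 10% fall in submissions

submissions_after = submissions_before * (1 - drop)
annual_revenue = fee * submissions_after

print(f"${annual_revenue:,.0f} per year")  # close to the quoted $2.5 million
```

The interesting property of the model is visible in the formula: revenue scales linearly with the fee, while the deterrent effect (the 10% drop) is assumed to be a one-off step, not proportional to the fee.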
Mixed Initiative Business Models
Open Access
Open access is a powerful new entrant among the business models available for scientific publishing. As such it will be discussed in depth in the next chapter (chapter 25). Suffice it to say that it is a business model which lends itself to supporting the ‘long tail’ of UKWs insofar as they would be able to gain access to published material ‘for free.’ A major access barrier facing them would therefore be eliminated.
‘Freemium’ and Free
‘Freemium’ is a business model which combines aspects of a free service with those of a subscription. It works by offering a basic product or service free of charge (such as software or web services) while charging a premium for advanced features, functionality, or related products and services. The word ‘freemium’ is a blend of ‘free’ and ‘premium’. The business model has gained popularity with Web 2.0 and open source companies. It was articulated by venture capitalist Fred Wilson in 2006 (Wilson, 2006): “Give your service away for free, possibly advertising supported but maybe not, acquire a lot of customers very efficiently through word of mouth, referral networks, organic search marketing, etc., then offer premium priced value added services or an enhanced version of your service to your [enlarged] customer base.” (Source: Wikipedia) The concept of ‘Free’ information has been explored by Chris Anderson (Anderson, 2009b) in his book entitled “Free: The Future of a Radical Price”. In this he speculated that value-added services could and should be provided in order to allow the basic information to be delivered for free. In the case of the music industry, the real money is no longer in making songs but in the many
ancillary services provided by the artist, such as live concerts, appearances, festivals, endorsements, etc. In the case of scientific publishing, the suggestion would be to allow free access to the research article, with publishers (and authors) receiving their main future revenue streams from packaged services which provide a solution to a problem, not just information. Anderson reflects on the huge changes brought about when one moves from even the smallest of price charges imposed on a good or service to no charge at all: it opens the floodgates to usage. Venture capitalist Josh Kopelman of First Round Capital highlighted the fact that there is a large divide between things which carry a price tag of any sort and those that are completely free (Anderson, 2009b): “The biggest gap in any venture is that between a service that is free and one that costs a penny.” The imposition of a price, no matter how low, typically decreases participation, often radically, according to Anderson. It challenges one of the main planks of twentieth-century business practice – that everything has a price. Anderson shows that through cross-subsidy of activities – giving something away but seeking income from other related or premium services – it is still possible to survive commercially in a ‘free’ economy. The consequence of this, and one which is causing concern to the established players in the scientific publishing sector, is that such a move from priced goods to free goods will distort the smooth flow of research information and create a great deal more noise in the system, and may essentially lead to a decline in the value of, and investment in, scientific publishing. However, as Anderson points out, the value which is conferred on society at large through a transition to free and open information systems is much more diffuse and significant than under the current system.
It is possible to quantify and be specific about what value is created within the scientific publishing system when price is used as a metric; when no price exists, the value is created through the intangible and imprecise benefits which society as a whole achieves through greater use of information, by means of increased productivity. To quote Anderson, “A lot of the costs of that free lunch fall under the category of ‘externalities’ – technically there but immaterial to you”. One might add that they are often not measurable in any tangible sense. Many publisher sites offer a ‘freemium’ model without recognising the fact. Users have free access to most if not all the features of the site (search, RSS, alerts, etc.), yet there is usually no premium fee for the advanced functionality or heavy usage that the most demanding users may be willing to pay for. Many datasets are also largely free, as are the global search engines. The other implicit rule is that the service being offered for free must have virtually zero marginal cost, i.e. the cost of serving the n’th user is near zero. “This works in digital products/services such as content, games, and so forth
and we see examples of it everywhere: Skype; Angry Birds; Wall St Journal; Pandora; and soon the New York Times.” According to William Park, CEO of DeepDyve (Park, 2011), the ‘freemium’ rule of thumb is the “5% rule”: the business model is sustainable if just 5% of the audience pays for the add-on or premium services. Park refers to the potential opportunity to segment features more granularly to attract and convert users – for example, to charge users discretely for critical features, or to offer varying degrees of access to content at different prices across the spectrum of customers, such that one category of users does not cannibalise another. Instead of eroding the market share of a related product, the different sectors act as a feeder or ladder for users to migrate upwards to premium services. This opportunity may only grow over time as potential customers become more educated, aware and connected. There are indications of greater connectivity in other areas of social communication which will assist such upgrading (Park, 2011). In the meantime, examples of ‘freemium’ services are slowly beginning to emerge in the scientific sector. These are still in the early days of development, but they show how innovative approaches, often by organisations outside the traditional publisher and library circles, are pointing the way forward. At a conference organised by Jisc and OAPEN held at the British Library in July 2013, on Open Access for monographs in the social sciences and humanities, a number of case studies were presented. It was striking that nearly all used a ‘freemium’ business model where the content was free to read online (HTML) but a deeper experience – downloading, copying, etc. – required paid access rights, as did access to richer formats such as ePub.
It occurred to some present that STM journals could learn from this: whether ‘freemium’ (free to read; pay to download and for richer formats) would work for journals, as this might be a lot simpler and perhaps cheaper for everyone. In effect, the publishing industry as we know it may have to go through a radical change, but society as a whole will benefit as users of information become more productive. This is analogous to the arguments put forward by the Australian economist John Houghton (Houghton, 2009). In his summary report Houghton declares that, for Denmark, the United Kingdom and the Netherlands, free access to scientific materials could offer significant benefits not only to research and higher education but also to society as a whole. In part these are derived from the inclusion of UKWs within the scientific information system. How could STM publishers survive in such an open environment?
– By giving away for free much of what they publish and by seeking returns from providing additional, premium services.
– By offering much better service level agreements, support and linked services.
– By attracting a share of the burgeoning online advertising market.
– By seeking new sources of revenue in complements – investing in research workflow processes, creating large data resources, etc.
“Free creates a lot of value around it, but like many things that don’t travel in the monetary economy, it’s hard to properly quantify” (Anderson, C. 2009b) and also hard to identify.
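Park's “5% rule” quoted above is easy to express as a sustainability check: the model works if premium revenue from the small paying minority covers the cost of serving everyone. The audience size, price and cost figures below are hypothetical:

```python
# Sketch of the freemium "5% rule": the service is sustainable if
# revenue from the small paying fraction covers total running costs.
# Audience size, price, and cost figures are hypothetical.

def freemium_sustainable(audience, paying_fraction, annual_price, annual_costs):
    """True if premium revenue from the paying minority covers costs."""
    revenue = audience * paying_fraction * annual_price
    return revenue >= annual_costs

# 100,000 free users, 5% of whom pay $60/year, against $250,000 costs
print(freemium_sustainable(100_000, 0.05, 60, 250_000))  # True: $300,000 revenue
```

The check makes the near-zero-marginal-cost assumption explicit: `annual_costs` is treated as fixed, so adding free users changes nothing except the pool from which the 5% converts.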
Alternative Business Models
The disenfranchised rarely have institutional backing for information access, and are not able or prepared to pay high subscription and article rates for their publications. Hanging over both publishers and users is the open movement, under which the expectation is that much of a user's information needs will be met from information services that are free. In which case, can the subscription system be maintained if the ‘long tail’ of the knowledge worker community is to be addressed?
SDI/RSS/Alerts
A solution would be to ring-fence the research library (institutional) market with the subscription system, and to reach the ‘long tail’ with a heavily discounted price. The two price options would need to coexist. This requires ‘price differentiation’. It also requires ‘market segmentation’ – clearly defining distinct audiences which can stand alone as far as price setting is concerned. Such segmentation should prevent other audiences from reaching into the beneficial lower pricing options, while avoiding a bureaucracy which would engender the same fate as journal subscriptions. Greater granularity in the offerings from a publisher would enable differential pricing to be applied and segmentation to occur – instead of the ‘Big Deal’ licence, or even the journal subscription, the ability to buy individual articles (or even parts of an article) would meet the true demands of individual researchers. This is not a new concept – early in the evolution of digital publishing there was the notion of ‘SDI’, or Selective Dissemination of Information. This required a profile of the researcher's interests to be matched against the incoming stream of scientific research results, with only those deemed relevant being passed on to the individual end user. SDI attracted little attention during the 1980s and 1990s as the subscription and licensing systems gained traction. First degree price discrimination, in which each customer is
charged the maximum amount the customer is willing to pay for a particular good or service, is the ideal. For a long time this was regarded as a mainly theoretical construct. SDI has been updated with newer technologies and services. RSS (rich site summary, or really simple syndication) and alerting programmes provide the same targeted approach as SDI. However, these have become popular on a site-by-site basis, with each information provider or publisher establishing its own SDI/RSS service. This only works if the site is dominant in a particular subject or discipline; otherwise it requires foraging across multiple standalone data sites and integrating the alerts provided in a consistent way. Nevertheless it is possible, in a fully digital and Internet-enabled world, and with a wide grey market of UKWs to be reached, that such ‘personalised’ information systems could be implemented through reaching agreement with the publisher/information provider. Privacy issues would of course need to be tackled with sensitivity: ‘Public concern about privacy is largely unfocused. Individuals complain about their loss of privacy, but rarely articulate what it is that they are specifically upset about.’
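The SDI mechanism described above – matching a stored interest profile against an incoming stream of articles and forwarding only the relevant ones – can be sketched in a few lines. The field names, profile keywords and sample articles are illustrative:

```python
# Minimal sketch of Selective Dissemination of Information (SDI):
# a user's interest profile (a set of keywords) is matched against
# incoming article metadata, and only relevant items are forwarded.
# Profiles and sample articles here are illustrative.

def sdi_matches(profile, articles):
    """Return articles whose title or abstract mentions a profile keyword."""
    hits = []
    for article in articles:
        text = (article["title"] + " " + article["abstract"]).lower()
        if any(keyword in text for keyword in profile):
            hits.append(article)
    return hits

profile = {"graphene", "nanotube"}
articles = [
    {"title": "Graphene transistors", "abstract": "High-mobility devices."},
    {"title": "Protein folding", "abstract": "Molecular dynamics study."},
]
relevant = sdi_matches(profile, articles)
print([a["title"] for a in relevant])  # ['Graphene transistors']
```

Modern RSS/alert services follow the same pattern, except that each publisher runs the matching over its own silo only – which is exactly the fragmentation problem noted above.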
Overview of Main Business Models
The following list summarises the key points about each of the business models. The common feature is that few proactively seek to extend the outreach of publications to non-affiliated professional or related audiences. The other problem is that information suppliers have little evidence of the size and needs of the various communities which make up the knowledge worker market.
Publisher Initiated
– Serial subscription and site licensing model (including document downloads)
  – The subscription and licence business model depends heavily on the health of the institutional library budget.
  – The subscription model faces a budgetary challenge as ‘Big Deals’ come under scrutiny and the gap between scientific output and library budgets continues to grow.
  – Large commercial publishers are intent on acquiring a greater share of the (static) library budgets at the expense of smaller publishers and other areas of library spending.
  – Subscription/licences represent a base business model from which publishers need to build other activities, such as Gold open access, premium information services and a more equitable royalty stream from supporting individual article sales.
– Online individual document purchase (from the publisher site)
  – Publishers are currently reluctant to pursue sales of documents from their own servers as this may cannibalise the subscription business model. There is, however, no substantive evidence that this is the case.
  – Publishers also face higher administrative charges associated with collecting large numbers of micropayments for individual article sales.
  – The high prices set by publishers deter sales of individual articles.
Library Initiated
– Document delivery
  – National and international formal document delivery has been declining rapidly during the past decade.
  – National science libraries are focusing on improved efficiencies to ensure that the costs of continuing to acquire a comprehensive journal collection are covered by the charges set for document delivery.
  – For e-delivery, such national document delivery services are in the hands of the publishers – the royalties plus in-house costs combine to make them uncompetitive (compared with a direct service from the publisher(s), who control the royalty rates).
  – However, they are able to perform a valuable service by creating an integrated catalogue of all electronic articles (such as the BL's ETOC) and establishing a centralised one-stop purchasing centre (thereby avoiding searching across thousands of publishers’ silos).
– Interlibrary loans
  – Sharing of published resources among research libraries is mainly focused on book exchanges rather than articles.
  – It imposes a significant cost burden on the large, comprehensive research libraries which support the many smaller libraries.
  – It represents more of an escape valve or fall-back for libraries if they cannot serve their local patrons from any other source – not a preferred solution.
– Walk-in access
  – Walk-in access procedures are currently being formalised, but remain subject to the terms of the licences agreed with the publishers.
– Alumni
  – Several projects involve experimentation with delivery of publications to a university's alumni without authentication barriers being put in place (ProQuest Udini).
– Public library access
  – The Publishers Licensing Society is conducting a trial to see whether a sustainable business can be developed whereby public libraries can be brought within nationally negotiated licences with publishers.
  – Similarly, Jisc Collections is conducting trials to see whether SMEs can be included within national academic licences.
– National licensing models
  – National licensing as a business model has been much discussed in the past, particularly by Jisc.
  – The problems of organising a feasible structure, and of getting buy-in from a representative set of publishers and libraries, have limited its scope as a viable business model.
  – However, complex issues such as cross-institutional use, home access, etc., would all be resolved through a broadly circumscribed licence model.
  – There are a number of examples of subject-based site licensing (in physics, through SCOAP³) and national site licences (Iceland) which can be analysed for their effectiveness and ease of implementation.
  – It does require central coordination, and central funding, to make this happen.
Intermediary Initiated
– Pay-per-view (PPV)
  – The library models mainly focus on enhancing access for established institutional research centres. PPV has the potential to reach beyond the library into wider knowledge worker sectors.
  – There are leading experiments being undertaken, DeepDyve being an example of one where articles are ‘rented’ rather than ‘bought’. The British Library is also developing novel ways of capitalising on its vast collection to serve an ‘article economy’.
  – One key feature of these PPV initiatives is that they address a new market sector for scientific publications – the professions, SMEs and individual scientists or knowledge workers in general.
  – The numbers of knowledge workers (though each makes less annual use of research publications) together create a new demand sector worth exploring – the as yet unquantified ‘long tail’ of scientific communications.
– Premium subscription services (‘freemium’)
  – Whilst some services (such as the above) address general scientific communication issues, other business models are being developed for projects which focus on specific disciplinary areas with a broader range of information services – not just journal articles (Knewco, etc.).
  – As these develop, the scientific publishing process could take on new directions and adopt new business models.
  – The publishing industry may need to take on board the openness and collaborative features of these new information systems and construct business models which include some ‘free’ elements on which premium services can be built (and charged for).
Author/End User Initiated
– Social networking and social media
  – Academia may be the final bastion against the onward march of social networking and social media adoption, but the ‘perfect storm’ forces are in place in the rest of society to drive greater adoption of Web 2.0 services in future.
  – Social networking may be the process whereby scientific communication is interfaced with the needs of knowledge workers in future.
  – On one hand there is a vast array of social networks (each involving millions of members); on the other are the knowledge workers (millions of practitioners) who have an ongoing need to be informed about developments of direct or indirect interest to them. The combination is a powerful force for creating new and appropriate business models to meet user needs.
Implications for Unaffiliated Knowledge Workers
Business models are made more complex as the industry migrates from a print to a digital delivery system. What was good in a printed journal scenario – its quality refereeing, its support from authors and readers, its institutionalised purchasing system through established library collections budgets – is no longer of paramount relevance in a digital world. Speed of delivery, effective search tools and low price thresholds take prominence. There is little overlap between the two systems. This is why a parallel business model – one arm catering for institutions and the delivery of premium services, the other targeting individuals and UKWs with discounted pricing strategies – may be appropriate. This parallelism reinforces the current strains seen in the scientific communication system described in an earlier chapter – the ‘frustration factor’ and ‘serials crisis’, for example. It sets the industry on the cusp of change – a change which, if driven by ‘freemium’ and ‘long tail’ concepts, should embrace UKWs to a greater extent. Openness is a key factor – whether it gets built into future accepted business models will dictate how easily UKWs will be incorporated and their differing information needs included. The Open Access movement as it impacts on STM is reviewed in chapter 25.
Chapter 25
Open Access

‘Openness’
Openness is a broad social movement involving significant cultural change. For publishing specifically, it replaces ‘toll-based access’ with ‘free access’ as the underlying business model. However, this runs counter to the tradition and legacy which are established features of scientific publishing. Notably, openness conflicts with the forces protecting intellectual property and copyright, which are equally powerful social movements with strong commercial support. A prominent champion of the Open Access movement in the UK has been Jisc. It funded a number of research studies, several of which suggest that if Open Access could be implemented by all relevant parties, “The increased impact of wider access to academic research papers could be worth approximately £170 million per year to the UK economy” (Read, 2010). Similarly, research funding agencies have been vocal in their support for Open Access for the dissemination of research results arising from their investments, such as the Wellcome Trust's support for research in the biosciences and medicine. Against such libertarian organisations are ranged the forces of the commercial publishers, notably through their national and international trade associations such as the IPA (International Publishers Association), the International STM Association, and the Association of American Publishers (AAP). These have equally deep pockets to fund lobbying activities to generate support for the commercial approach to scholarly communication and the protection of copyright. They will be commented upon in chapter 26.
Greater Openness
There is demand for ‘free’ services on the Internet; it is inherited from the philosophy of the Internet's founding fathers. The free access mentality coincides with the emergence of new information tools – such as global search engines, notably Google/Google Scholar, Yahoo, Scirus, Medline, PubMed and arXiv – which point out, often for the first time, the availability of research results which may be relevant to potential users in their personal lives or professional careers. These search tools are free to use, mainly because they build on technology whose costs keep falling.
Free search tools are a major factor in stimulating the awareness of knowledge workers outside academia. Under the print paradigm such discovery tools leading to article awareness were lacking. Openness also involves ‘free at the point of usage.’ This has been advocated by Chris Anderson (Anderson, 2009b) in his book ‘Free: The Future of a Radical Price’ (see previous chapter). He makes the point that the Internet encourages free access to basic information services (such as research articles), with commercial returns being sought from other activities such as offering premium services, seeking advertising sponsorships, etc. Charging any price, even a few pence, for accessing research articles online would kill off much potential demand, particularly from UKWs (unaffiliated knowledge workers). In practice, openness allows researchers to extend their research horizons. As pointed out by Michael Nielsen: A scientist can freely download as many articles … on any … subject as they wish, whilst other people are kept out by the fees charged. It is as though there is a wall dividing humanity. The Open Access movement is trying to break down the wall. (Nielsen, 2011).
Open Science

The research and publishing site ScienceOpen¹ builds on the idea that scholarly publishing is not an end in itself, but the beginning of a dialogue to move research forward. ScienceOpen provides a holistic authoring environment in support of the research communication process, putting useful resources at one’s fingertips. The site offers features that are requirements for Open Access, and some that raise the bar for other publishers. Articles are published under a common and flexible Open Access licence (Creative Commons BY, which allows free use of the work as long as the original author is credited) and are fully compliant with funder mandates. Preview articles that have passed an internal five-point editorial check receive a Digital Object Identifier (DOI) from CrossRef, the DOI registration agency, so the work can immediately accrue citations. Open peer review takes place after publication, and article-level metrics are provided in collaboration with Altmetric (a service which tracks the online attention articles receive). It also provides free collaborative workspaces where researchers can develop their manuscripts as a team without the need for email. Informal scientific communication takes place in public groups or through social networking
1 ScienceOpen is a freely accessible research network to share and evaluate scientific information.
tools. Participation in reviewing and commenting, on both ScienceOpen articles and all the Open Access content available on the site, requires only free membership. Members are allocated different privileges depending on the publication history recorded on their unique ORCID research identity (ORCID is a persistent digital identifier that distinguishes a researcher from every other researcher). The role of ‘openness’ in science has been highlighted by another pioneer of the Open Access movement in science publishing – Vitek Tracz – who created the publisher BioMed Central (BMC) in 2000 before it was sold to Springer Science+Business Media in 2008. Tracz has since moved from Open Access in publishing to openness in science, a much broader concept. In a YouTube presentation Tracz claims that conventional publishers have not done science a good service. Open Science in Tracz’s terms enables the wide range of media associated with a research project to be made available online with only initial and cursory refereeing. The range of source material becomes dynamic, enabling other experts and researchers to comment on the ‘manuscript’ online. There are risks associated with this, but the immediacy of dissemination and its constant iteration into new and more useful forms makes Open Science more palatable to its advocates than the rigid structure of traditional refereeing and STM journal publishing. However, Open Access is not necessarily the solution, because there is a lack of agreement on the problem to be resolved. Some supporters of Open Access are motivated by the need to revamp traditional subscription-based systems because of simmering resentment against the profits earned by scientific publishers; others pursue Open Access because it has the potential to democratise scientific communication; and others, such as Tracz, see Open Science as an alternative way to provide a sustainable commercial future for scientific communication within an Internet environment.
It is this diversity which is one of Open Access’s strengths, according to the late Fred Friend (in a bulletin board item of 7 November 2013 on LibLicense-L): “It is not that Open Access is presented as a solution to problems but as an alternative way forward, arguably more cost effective than the present infrastructure.” And further: “The Open Access principle is sufficiently flexible to be applied in different ways, using different forms of the model for different forms of publication, in different cultural environments and within different research funding structures.” (Friend, 2013) Friend had earlier published his thoughts on Open Access in an article in Serials in November 2007 (Friend, 2007). While there is a lack of clarity over what the main driver of Open Access should be, there is also confusion, conflict and misunderstanding over the policies to be applied. ‘Open Access’ in scientific publishing takes several distinctive forms, but all of them share the principle that content is ‘free at the point of usage’ for end users.
Open Access

The generic ‘Openness’ described above is not to be confused with the precise definitions applied to the various Open Access movements in publishing. The latter do have a role, but what they miss is a focus on developing outreach to the wider knowledge worker sector – to individuals – through the adoption of relevant commercial, marketing and editorial strategies. As with the toll-based (subscription and licensing) approaches of commercial journal publishers, Open Access journal publishers still see the pre-packaged e-journal aimed at institutional library collections as their main target for onward dissemination. There are several types of Open Access:
– Less common is the so-called Grey route, which involves the informal exchange of near-to-final preprints of articles, circulated by the author to all potentially interested parties, using the author’s own web site as the resource from which such articles can be downloaded.
– The Green route involves the author depositing a version of the article (not necessarily the final published version) in a local institutional repository or a global subject repository.
– The Gold route involves the author (or their institution or funder) paying a publisher to publish the article, which is then made available free of any further charges. These author charges are known as APCs (Article Processing Charges).
– Others claim there are Platinum and other colour schemes, but these are marginal compared with the main Gold and Green distinctions.
– Finally, there is Hybrid Open Access, in which Open Access (free) articles are included in a subscription-based journal – some articles in the journal issue are accessible for free, whereas others follow the traditional route of toll-based access.
Grey Open Access

There is a thread going back decades within the scientific publishing system which set the scene for what has become a largely unheralded ‘Grey’ Open Access movement. For many decades it was a feature of scientific publishing that – in return for the initial work put into the research and the subsequent writing up of the results for publication – an author was entitled to a number of physical, printed reprints of the published article. The number of such reprints supplied varied
according to the policy of the publisher, but some 20–30 copies were typical. In addition the author could purchase additional printed copies from the publisher. These were used by the author to post to colleagues, peers, friends and family, either proactively or in response to a request. There was an informal exchange of such reprints, all free to the end user, all open, and conducted through the mail. But it was in essence peripheral to the mainstream toll-based journal publishing system. Then came the Internet and the web, bringing electronic publishing and online dissemination capabilities. No longer was it necessary to send reprints by snailmail – downloading copies from file servers became an option, faster and more convenient. Instead of requesting a printed reprint from the author, potential users could now collect the electronic version from the author’s web site, often without the awareness or agreement of the author concerned. The leading researchers in any given field – the ‘invisible college’ – usually had their own web sites, which would include their broader academic achievements and links to important external developments, as well as their own publications in full text and online. But there is a problem. The final version of the article’s content – the so-called article (or version) of record – technically belongs to the publisher if the author has transferred copyright to the publisher. In the past the author willingly gave up many intellectual property rights (IPR) in order to get the publication into the scientific system – to gain international recognition, more research funding, and in some cases tenure, stemming from successful exposure of their research output by the publisher. However, this means that the author would be contravening the publisher’s copyright if they allowed anyone to access and download the final (or stage 3) article.
But Grey Open Access does not require that the final version be the one which authors make available on their personal web site. They could equally include an earlier version – before final mark-up and pagination are undertaken, as in stages 1 or 2 of manuscript preparation – on their own site, and allow access without contravening copyright. It could also include supporting data, lab notes, software and audio/visual support material – much more flexible and extensive than relying on the text-based article alone. Grey Open Access is not always accurate or citeable. But as an indicator of broad research results it is often useful – and it is available quickly, for free, and without complicated administrative and financial barriers having to be overcome. However, the real difficulty with such an informal system is that there is a wide global scattering of researchers in any one area, some of whom may have their own up-to-date web site and others not. It is unclear how extensive the
‘Grey’ Open Access movement has become – how many leading researchers have created their own web sites and included their published articles and data on them. This is one of the many ‘known unknowns’ which exist in this area of Open Access. It is perhaps surprising that so little research has gone into exploring the actual reach achieved by the Grey Open Access system. Skipping between various author web sites to pick up their latest (stage 2 and earlier) articles is time consuming and not always comprehensive. It was only in 1991, when Dr Paul Ginsparg, then at the Los Alamos National Laboratory (LANL) in the US, provided a centralised facility into which physicists – high energy physicists in particular – could deposit their research findings, that a new aspect of Open Access began. This ushered in the era of Green Open Access.
Green Open Access

Green Open Access is centrally managed within an institution, and adopts more systematic and structured approaches, particularly on issues such as metadata provision. Green Open Access and institutional repositories are, in effect, two sides of the same coin. Initially the repository was subject-based – such as the arXiv database of physics articles which emerged from Dr Paul Ginsparg’s efforts in the early 1990s. During the past two decades subject-focused central facilities have spread to cover most areas of physics, mathematics, computing and IT, and some aspects of the social sciences and economics. A growing number of medical publications have also become available through central subject-based repositories – PubMed Central in the US, and Europe PubMed Central in the UK and Europe. They all have the same mission – to enable easy, unencumbered access to a researcher’s latest results. In theory such a repository combines all the grey web sites of researchers in a given subject area; it also includes manuscripts from authors who do not have their own web site. The early subject-based collections of manuscripts are now being complemented by research institutions – universities and research institutes – which create their own repositories and ‘require’ their staff and researchers to deposit their results in them. These are no longer subject-based: they include all disciplines within the same repository, as dictated by the breadth of the research scope of the institution itself. The aim is that each institution supporting research would have its own resource – the institutional repository – into which all the authors from the institution would deposit their latest research results. It could become the means to project the image of the institution as a research centre, or a centre of excellence in certain fields.
The repository – in many cases coordinated through the library – would manage the inflow of material to ensure that it met quality and bibliographic standards. It would also adopt accepted international standards to allow the deposited articles to be freely accessible to anyone throughout the world, notably the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). The key is to ensure that metadata is applied to the article – a bibliographic process close to the professionalism of the librarian – so that, in addition to OAI-PMH services, specialised Open Access search engines such as OAIster and more general search engines such as Google and Google Scholar can find relevant works across the global network of IRs (institutional repositories). In the past decade there has been a slow growth in Green Open Access, to the extent that there are now – fourteen years after the movement was formalised at the conference which produced the Budapest Open Access Initiative (December 2001) – some 1,500 institutions worldwide which have repositories of articles written by their staff and researchers. 11.9% of all scientific articles published in 2008 were available through some form of Green Open Access, and the claim is that in 2009 this had risen to 15% (Bjork et al., 2009). The numbers have continued to grow as more funding agencies ‘mandated’ or demanded that their grant beneficiaries and local research staff make research findings available in an Open Access system. These funding agencies include the National Institutes of Health in the U.S.A., the Wellcome Trust and the Research Councils in the U.K., and increasingly the European Commission. However, publishers see Green Open Access as a destructive force, one which if adopted wholesale would undermine the established scientific publishing edifice. In particular, it would destroy the refereeing infrastructure through which quality is imposed on publishing output.
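The metadata harvesting protocol mentioned above works over plain HTTP: a harvester sends GET requests carrying a small set of standard query parameters (verb, metadataPrefix, and so on) to a repository’s endpoint, and receives XML records in return. A minimal sketch in Python of how a harvesting service might construct such a request – the endpoint URL is invented purely for illustration:

```python
from urllib.parse import urlencode

def build_listrecords_url(base_url, metadata_prefix="oai_dc",
                          set_spec=None, from_date=None):
    """Build an OAI-PMH ListRecords request URL.

    The verb and parameter names (verb, metadataPrefix, set, from)
    are defined by the OAI-PMH protocol; oai_dc (simple Dublin Core)
    is the metadata format every repository must support.
    """
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    if from_date:
        params["from"] = from_date  # only harvest records added since this date
    return base_url + "?" + urlencode(params)

# Hypothetical repository endpoint, for illustration only
url = build_listrecords_url("https://repository.example.ac.uk/oai",
                            from_date="2015-01-01")
print(url)
```

A service such as OAIster would issue requests of this kind against thousands of repository endpoints and parse the XML returned by each to build its central index.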
The lack of control and authority would allow much noise to creep into the system, which would be counterproductive to the research effort. The European Commission funded a major study of the impact of Green Open Access to see how the publishing system was being affected, with the International STM Association as the main coordinator of the research effort. The study, known as PEER, produced a number of reports (PEER, 2012), one of the more surprising results being that the deposition of articles into IRs had little effect on the market for published articles – in fact there was some suggestion that it enhanced the sale and exposure of the toll-based version of the article. In a separate statement, the International Association of STM Publishers (2011) emphasised its commitment to the wide dissemination of, and unrestricted access to, the content its members publish, on the understanding that the services that publishers provide are paid for – and, in particular, that the licences which
have been agreed between publishers and libraries, and between publishers and authors, are held as sacrosanct.
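The metadata which Green repositories expose for harvesting is typically simple Dublin Core – and applying it correctly is, as noted above, close to the librarian’s professional skill set. A minimal sketch of parsing such a record in Python; the record, its field values and the identifier URL are invented for illustration:

```python
import xml.etree.ElementTree as ET

# A simplified, invented oai_dc record of the kind a repository exposes
SAMPLE = """<oai_dc:dc
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Self-Archiving in Practice</dc:title>
  <dc:creator>A. Researcher</dc:creator>
  <dc:date>2015-06-01</dc:date>
  <dc:identifier>http://repository.example.ac.uk/id/eprint/1234</dc:identifier>
</oai_dc:dc>"""

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def dc_fields(record_xml):
    """Return the Dublin Core elements of a record as a plain dict."""
    root = ET.fromstring(record_xml)
    return {child.tag.replace(DC_NS, "dc:"): child.text for child in root}

fields = dc_fields(SAMPLE)
print(fields["dc:title"])
```

It is this standardised, predictable structure – rather than the idiosyncratic layout of an author’s personal web page – that allows OAIster, Google Scholar and similar services to index repository content at scale.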
Gold Open Access

Gold Open Access is where the author pays the publisher to have their article published, which then enables the article to be read and downloaded for free by anyone accessing the publisher’s web site. Payment is set by the publisher and can range from £1,000 to £3,000 per article; these charges are known as APCs (Article Processing Charges). Though the individual author may be charged, there are many grounds on which such charges can be waived. Some significant players have been instrumental in bringing this business model to fruition. One is Dr Michael Eisen, an evolutionary biologist at the University of California, Berkeley, and an Investigator of the Howard Hughes Medical Institute. He is also a co-founder of the Open Access publisher the Public Library of Science (PLoS). Founded in 2000, PLoS was conceived as an advocacy group rooted in the complaints of leading researchers against the excessive profits made by some of the largest publishers, who relied on prepaid subscriptions for their income. PLoS’s first initiative was to publish an Open Letter and invite scientists around the world to sign it. Those who signed pledged that henceforth they would “publish in, edit or review for, and personally subscribe to only those scientific journals that agreed to grant unrestricted free distribution rights to any and all original research reports that they have published, through PubMed Central and similar online public resources, within 6 months of their initial publication date.” Nearly 34,000 scientists from 180 countries signed the pledge. While a small handful of publishers complied with the demands outlined in the letter, most of the scientists who were signatories ignored their own pledge and continued publishing in the very journals of which they had been critical.
Undeterred, Eisen and the other two PLoS co-founders – biochemist Patrick Brown and Nobel Laureate Harold Varmus – reinvented the organisation as a nonprofit publisher, and in 2003 they launched the Open Access journal PLoS Biology. PLoS Medicine followed a year later. These titles transformed the way scientific journals are published. PLoS was able to become a publisher thanks to a $9 million grant it received in 2002 from the Gordon and Betty Moore Foundation. The challenge was to become financially sustainable before the grant ran out. With this aim in mind, PLoS decided to levy a one-off article-processing charge (APC) for each paper published. The publisher then makes all the
papers freely available on the Web. PLoS publishes seven Open Access journals and is also experimenting with new Open Access services such as PLOS Currents, which aims to minimise the delay between the generation and publication of new research: papers are published within days of being submitted. Besides PLoS, there is also BioMed Central. Just over a decade ago, the author-pays business model for scientific publishing was introduced by the entrepreneur and pioneer Vitek Tracz. Backed by funds from the sale of his previous publishing business to Elsevier, he proceeded to establish something which challenged the heart of the traditional commercial scientific journal publishing operation. He launched BioMed Central (BMC), which gave researchers the opportunity to have their final, stage 3, articles disseminated to everyone irrespective of affiliation. Tracz recognised that the prime beneficiary of the traditional scientific journal publishing system was the author. Authors who achieved global visibility and respect for the quality of their research, as summarised in the published article, would be in a better position to gain additional research funding, tenure and image recognition. As such, the author – not the reader – should bear the publication charges. BioMed Central was set up with a stable of biomedical journals, each of which carried an Article Processing Charge. Initial APC levels were £500, but these have since risen to between £1,100 and £1,600 depending on the journal concerned. Besides PLoS and BMC, another example of Gold Open Access publishing is Hindawi, a Cairo-based publishing house which transferred its business model from subscriptions to Gold Open Access. In this instance the model works because low editorial costs, a result of being based in Egypt, together with an educated home base, have made it a viable proposition.
One unfortunate side effect of the Gold Open Access business model is that it enables unscrupulous individuals and publishers to set themselves up as Open Access publishers, use broadcast emails to solicit articles from all and sundry, and levy an APC on each submitted item. These journals carry little credibility – in fact some never appear. They are an attempt to exploit the system and its low barriers to entry, and to make a rapid commercial return with few risks. This aspect of Open Access is being monitored through a list compiled by Jeffrey Beall, a librarian at the University of Colorado Denver – a list which has become known as ‘Beall’s predatory list.’²
2 Scholarly Open Access – Critical Analysis of Scholarly Open Access Publishing. Available at: http://scholarlyoa.com/
As far as STM publishers are concerned, Gold Open Access is the lesser of the two evils of Open Access (compared with Green), although from discussions held with U.K. researchers in recent months it appears that confusion and scepticism still run rife at the grass-roots level. Nevertheless, research funding agencies in the U.K. are pushing forward with demands that the results of the work they fund be made available in Gold Open Access form. In this respect the U.K. is increasingly out of line with developments in other western countries, where Green OA rules (see Chapter 26). Libraries are seen as a potential key stakeholder in the development of one aspect of the Open Access movement – Green Open Access – through the administration of institutional repositories on behalf of their institutions.
Institutional Repositories

Green Open Access requires initial investment in a technical infrastructure: a central repository into which articles are deposited, and an access procedure which allows the deposited articles to be located and downloaded. These involve software and hardware investments within the institution. Several studies have looked into the role of Green Open Access’s institutional repositories within scientific communication. A report by Beckett and Inger entitled “Self Archiving and Journal Subscriptions: Coexistence or Competition?” investigated the relationship between subscriptions and Green Open Access. This report (Beckett & Inger, 2007) used conjoint analysis to separate the interacting variables involved in a librarian’s cancellation decision. Six variables were considered, including cost, quality, currency, version of articles, and percentage of articles available. A similar study had been undertaken by Rowlands and Nicholas of CIBER (CIBER, 2005), focused not on librarians but on researchers and authors. The report, “New Journal Publishing Models – An International Survey of Senior Researchers”, covered the behaviour, attitudes and perceptions of 5,513 senior journal authors on a range of issues. Its conclusion was that the population in the study gave little support to “alternative” publication systems such as the deposition of research results within IRs. Authors were not knowledgeable about IRs – fewer than 10% declared they knew “a lot” or “quite a lot” about this development. A significant group, 3,089 of the 5,513 authors, “knew nothing at all” about IRs, and a further 1,888 “knew a little”.
This study was preceded by a similar one commissioned in the UK by the Publishers Association and undertaken by Rowlands (Rowlands, 2004), compiled from 4,000 responses. It highlighted that, at that time, many authors used their home pages or departmental web sites to make some of their research findings available: 32% of respondents used the Web in this way, and 53% said they might do so in future; a small minority dismissed it as a possibility. About one-fifth (21%) of respondents said they deposited scientific material in an institutional repository, and just over 55% said they might do so in future; 15% said they had not done so and had no intention of doing so. Those publishing in computer science, mathematics and engineering were the most likely to have their work made available through an institutional repository. A related study was published a year later by Alma Swan and Sheridan Brown of Key Perspectives Ltd, a UK-based consultancy with strong credentials in the Green Open Access movement (Swan & Brown, 2005). This study drew on four sources which together gave almost 1,300 responses. According to their survey, 49% of the respondents had self-archived at least one article during the previous three years in one of three ways: in an institutional repository (20%), in a subject-based repository (12%), or on a personal web site (27%). Of the authors who had not self-archived their articles, 71% were unaware this was an option available to them. However, 81% of authors said they would comply with a mandate from their employer or research funder to deposit their articles in an institutional or subject-based repository; 5% remained adamant they would not comply with such a mandate, feeling that self-archiving might disrupt the current scientific publishing model.
The Houghton Reports

A significant contribution to the debate about the economic and strategic significance of institutional repositories and Green Open Access has been provided by Professor John Houghton and his team at Victoria University, Australia (Houghton & Sheehan, 2006). Their original study attempted to quantify the benefits which society at large would reap from a switch in scientific publishing business models from toll-based (subscription) to Open Access. The economic model used was based on a workflow model of the existing scientific publication process, and it drew much criticism from the publishing industry at the time. However, it demonstrated that, using the Houghton
formula, society stood to make considerable savings from a switch to Open Access. The 2006 report was followed up by one which looked at the specific situation in the U.K. In January 2009, a Jisc-funded report by Houghton, Oppenheim and a team from Loughborough University claimed that a switch to Open Access would save the U.K. almost £160 million (Houghton et al., 2009b). Houghton found that, with a 20% return on publicly funded R&D for the major categories of research expenditure in the U.K. in 2006, a 5% increase in accessibility and efficiency would have been worth £124 million in increased returns to Higher Education R&D, with around £33 million in increased returns for the research councils’ competitive grant-funded R&D. The publication, “Economic Implications of Alternative Scholarly Publishing Models: Exploring the Costs and Benefits” (Houghton et al., 2009b), was quickly followed, on 13 February 2009, by a joint statement from the three main U.K.-based publisher associations, which questioned some of the results and assumptions – and, most significantly, the extent of publisher involvement in the study. Further studies were undertaken by Houghton in the Netherlands (Houghton et al., 2009a) and in Denmark (Houghton, 2011). In addition, the Knowledge Exchange in Europe commissioned a further report from Houghton (Houghton, 2009). Knowledge Exchange is a partnership of the academic IT services of Denmark (DEFF), the United Kingdom (Jisc), the Netherlands (SURF Foundation) and Germany (DFG). A press release issued by the Knowledge Exchange on 1 July 2009 commented on the benefits of Open Access and how they outweighed the costs across all national research structures. According to the report, for Denmark, the United Kingdom and the Netherlands, free access to scientific materials could offer significant benefits not only to research and higher education but also to society as a whole.
Adopting this model could lead to annual savings of around EUR 70 million in Denmark, EUR 133 million in the Netherlands, and EUR 480 million in the U.K. The report concludes that the advantages would not just be long term; in the transitional phase, too, more Open Access to research results would have positive effects. There are now (in 2015) 6,022 journals in the Directory of Open Access Journals (DOAJ). Currently 2,563 of these journals are searchable at article level, and 501,354 articles are included in the DOAJ data service. The growth in the number of repositories worldwide supports the expansion of the Green route infrastructure (Fig. 25.1):
Fig. 25.1: Growth in the number of institutional repositories (IRs) worldwide. The chart, “Growth of the OpenDOAR Database Worldwide”, plots the cumulative number of repositories added between 2005 and 2011 (OpenDOAR, 24 October 2011). Source: OpenDOAR data service, Centre for Research Communications, University of Nottingham, U.K.
By June 2015 the number of institutional repositories recorded in the OpenDOAR database had climbed to 2,600.
Mandates

The move towards greater openness could take years or generations to come into effect. A ‘tipping point’ has to be created – without an external force being applied, things will remain as they are. Mandates imposed by funding agencies to change researcher practices could create that tipping point. Higher education institutions and funding agencies increasingly require the beneficiaries of their support to make the results of research efforts available free at the point of usage through local or subject-based repositories (through mandates). This is the crux of the Green Open Access movement, putting the onus on the individual researcher to self-archive in such repositories for fear of losing future funding if they fail to do so. However, at present (May 2015) mandates remain relatively few and their enforcement is often cursory. Almost 700 unique mandates are recorded worldwide in the Registry of Open Access Repositories Mandatory Archiving Policies (ROARMAP). Fig. 25.2 shows the known growth of mandates.
Fig. 25.2: Growth of Mandates. The chart plots the cumulative quarterly growth of mandates from 2003 to mid-2013, broken into four series: funder, institutional, multi-institutional and sub-institutional mandates.
The breakdown of these mandates by type of organisation is as follows (source: ROARMAP, May 2015):
– Funder (78)
– Funder and research organisation (54)
– Multiple research organisations (8)
– Research organisation (e.g. university or research institution) (482)
– Sub-unit of a research organisation (e.g. department, faculty or school) (71)
These are still only a small fraction of the world’s total number of universities, research institutes and research funders. According to Arthur Sale (University of Tasmania) there are an estimated 10,000 research universities worldwide. The U.K. has 54 mandates (out of 160 higher education institutions), and the U.S.A. has 81; the E.U. is also beginning to catch up. Only a small proportion of the universities in the U.K. have therefore mandated Open Access, only a few of these mandates are strictly enforced, and only a few are designed to be fully interoperable between institutions. Nevertheless, considerable grass-roots pressure is being exerted by Green Open Access supporters on both sides of the Atlantic to expand the coverage and acceptance of mandates, a process which could do much to change the face of scientific communication in years to come. The OpenDOAR registry – which includes mandated repositories but is mainly focused on voluntary ones – had grown to 2,600 repositories worldwide as of June 2015, and the Registry of Open Access Repositories (ROAR) puts the figure at 4,009. Supporters of the Green Open Access movement have used the process of funders’ mandates as a stick with which to beat the scientific publishing industry. Many
354 Chapter 25
Open Access
felt that the profit levels achieved by commercial organisations such as Elsevier, Wiley and Springer were exorbitant and contrary to the welfare of science – that commercial publishers were ‘parasites on the body politic of scientific endeavour’ – and that any means whereby their functions could be trimmed should be supported. Not all Green Open Access advocates are so belligerent, and Professor Stevan Harnad, seen as the leading international proponent of the Green Open Access movement, has been at great pains to argue that traditional scientific publishing and institutional repositories containing pre-published manuscripts can coexist. Meanwhile, ten organisations from the U.K.’s higher education and research sector have joined forces to form the U.K. Open Access Implementation Group (OAIG), which seeks to drive the implementation of Open Access in the U.K. It will coordinate evidence, policies, systems, advice and guidance, to make Open Access an easy choice for authors and one that benefits all universities. There is, however, the other side of the coin – authors themselves face perceived and actual barriers in disseminating their own research results to affiliated and unaffiliated audiences alike. In a study undertaken on behalf of the Publishing Research Consortium, Morris (2009) examined what is permitted by publishers’ author agreements, and what authors think they are allowed to do. The policies of 181 publishers, representing 75% of published papers, were examined; 1,163 authors responded to a questionnaire about what they wanted to do with their manuscripts, and what they believed publishers allowed them to do.
It appears from this study that authors significantly underestimate what they are allowed to do with both the submitted and the accepted versions of their articles; they are also unaware of how many publishers allow use of the published PDF version for sharing with colleagues, incorporating in their own work, or use in course packs. However, they significantly overestimate the extent to which they may self-archive the published PDF version. The U.K. government has also recently come out in support of Open Access. According to the report entitled “Innovation and Strategy for Growth” issued by the Department for Business, Innovation and Skills (UKDBIS, 2011), it is expected that the Research Councils will provide the funds to enable authors to deposit published articles or conference proceedings in an Open Access repository at or around the time of publication. The report notes that this practice is unevenly enforced. U.K. sociologist Daniel Allington (Allington, 2013) makes the point that Open Access is not a total solution to the scholarly communications problem. As a researcher he initially felt that Green Open Access was a good idea but now feels it is a drain on university budgets (institutional repositories are
not free to run). Gold Open Access makes life difficult for researchers who wish to publish in journals unless they are sufficiently wealthy to cover costs not taken on by the institution. It is also a breeding ground for ‘predatory publishing’. (“On Open Access, and why it’s not the answer” posted on 13 November 2013 – Allington, 2013) In his blog he described his own personal ‘institutional repository’ in which he deposits electronic copies of his own publications. He has some control, and some feedback, from running his own online service – “…the would-be reader cannot simply click on a button for a download [of one of his publications], but must submit a request (by filling out a web form) that I in turn must approve (by clicking a button). This in turn has the interesting side effect of keeping me informed on who is getting a free copy.” His point is that this system has many advantages, based as it is in the traditional printed reprint distribution culture, but it has been overlooked or ignored in the frenzy with which the two main Open Access processes have been pursued. Allington’s policy is akin to the Grey Open Access model described earlier.
Hybrid Open Access
Under pressure from the growing band of devotees to the Open Access cause, several leading commercial STM journal publishers have adopted a halfway-house approach in adapting to the new business models. Led by Oxford University Press and Springer S&BM, they provide room in their established subscription-based journals for Gold Open Access articles. The editorial process for both types of article is the same – the difference is that the Open Access articles are financed by author payments and are available for free on the publisher’s web site, whereas the traditional toll-based articles are subject to authentication and authorisation procedures in the traditional way. However, an unattractive aspect of such ‘hybrid’ journal publishing is that the same journal issue may contain both ‘author paid’ and ‘subscription based’ articles, and the payment streams become confused. It is difficult for end users to disentangle the two, notably those end users who are non-affiliated knowledge workers. It also raises problems for those agencies which supply individual articles under document delivery. For delivery of articles online, document delivery agencies are required to charge the traditional operational fee plus the royalty rate applicable for that journal title. However, when Gold Open Access (author paid) articles are included in a subscription title, document delivery agencies have no way of disentangling those articles which are Gold (and therefore would bear no royalty charge) from those articles which have gone through the normal licensed paid route. As such, it is conceivable that articles which should be delivered free of royalty charge by document delivery agencies would in fact bear the rate applicable for the host journal – giving publishers a double income, or “double dipping”, for the same article. This issue is about to be addressed by NISO with an attempt to get standard metadata applied to articles describing the access rights – whether ‘open’ or ‘toll-based’ – of each article. This will take some time to resolve, and in the meantime the ‘double dipping’ problem continues. Another onus on publishers is to make sure that the subscription price for the hybrid journal falls in line with the proportionate growth of Gold articles contained within the journal. There is as yet no systematic means of ensuring that such a reduction in subscription prices does occur. It is left up to the publisher to make the appropriate adjustments in the (subscription) year after publication, when the proportionate split is known. It could become a sensitive issue, with complaints arising that adequate compliance is not being made. The example of “hybrid journals” has created a degree of confusion in the minds of many researchers and authors. As far as the commercial publishers are concerned, however, they are still able to charge APCs whilst maintaining their subscription/‘Big Deal’ business models as much as possible.
Implications of Open Access
‘Gratis Open Access’ involves the removal of price barriers alone (‘weak’ Open Access), and ‘libre Open Access’ involves the removal of price and at least some permission barriers (‘strong’ Open Access). A typical funder or university mandate requires gratis Open Access. Many Open Access journals provide libre Open Access, but many others provide only gratis Open Access. There is therefore more than one degree of Open Access. Peter Suber (Harvard University’s Office for Scholarly Communication) described the hierarchy of desired objectives. For example, he says that “libre” Open Access is better than “gratis” Open Access, but that gratis is better than nothing. He also points out that while green is often limited to gratis, gold is not a guarantee of libre either. In short, green and gold relate in complex ways to the desired objective of achieving Open Access. The uncertain relationships between ideal ends and availability militate in favour of sheer pragmatism (Suber, 2012).
Gold Open Access (and to a certain extent Hybrid Open Access) is distorting the traditional market structure for scientific communication and publishing. According to a report by Outsell, the entire STM publishing industry generated $24.5 billion in 2012. Of this, $6 billion came from the journals programme (Outsell, 2013). Within this, Open Access revenues amounted to $0.172 billion, or 2.2% of the journals programme receipts. So the commercial case for Gold/Hybrid Open Access still needs to be proven. However, the relative growth rates of the STM industry overall, the journals programme and Open Access show vastly different patterns – STM grew 4.1% to 2012 and journals grew 2.1%, but Open Access (from an admittedly low base) grew by 34%. The breakdown of the $172 million from Open Access into its components was as follows:
– Gold Open Access journals: $…
– Hybrid journals: $…
– ‘Institutional’ Green Open Access arrangements: $…
Commercially, the above Gold/Hybrid revenues generate 2.2% of journal revenues; however, in terms of physical output (editorial), Gold Open Access overall accounts for between 10% and 12% of article availability. If Green Open Access content is also included, the total rises to between 20% and 24% of total STM journal article output. This pattern shows the increasing ‘openness’ of the scientific publishing industry, but at the expense of the profitability of traditional journal publishing. Open Access is likely to advance further as ‘mandates’ by institutions and funding agencies come into play. How will Open Access models affect the business of scholarly publishing? In many cases, the cost burden appears to be shifting from library budgets to authors and their research grants; so while the revenue stream may come from a different bucket, it is still largely paid for by scientists, their institutions, or the government. While Open Access experiments are certainly a growing trend among commercial publishers, it remains to be seen how this will affect the quality of information now available without a pay wall. We may have reached a tipping point for new initiatives, but are authors voting with their feet by abandoning traditional subscription journals and publishing business models? Or, more to the point, will tenure and promotion processes evolve to embrace Open Access? Equally, will the business model underlying Open Access prove as profitable as the highly successful subscription business, and if not, what does that mean for the viability of scientific publishers in future? And,
finally, will Open Access in all its variations prove to be a boon for UKWs in their objective of becoming more involved in the creation and use of scientific research results in future?
Stakeholders’ Perception of Open Access
Researchers’ Perception of Open Access
Individual researchers appear not to be convinced of the merits of Open Access. At the root of the issue is the monomaniacal intensity that ambitious scientists must bring to the pursuit of accredited scientific publications and grants. This arises in part out of the intense competition for scientific jobs. For example, 1,300 people earn physics PhDs from U.S. universities each year, but only 300 faculty positions in physics open up. In such a competitive environment, working 80-plus hours per week is common. Scientists who already have tenured positions continue to need grant support, which requires a strong work ethic still focused on producing papers – the basis on which their productivity and effectiveness are still judged. The ‘supply side’ of research article provision is therefore unproven. On the ‘demand side’ of the equation, the main barrier to accessing research publications (for more than 70% of researchers) identified by CIBER (CIBER, 2011) is an unwillingness to pay for an article at the prices currently being quoted. This is followed by the lack of a hard copy in the library or the unavailability of the article in a digital format, particularly for older material. Differences in responses between the four sectors – universities and colleges, industry, medical, and research institutes – are insignificant, except that university and college researchers are much more likely than others to report problems accessing material from home or not being able to find a physical copy in their library. Knowledge workers in industry are much more likely to report technical problems paying for an article once they find it. Nevertheless, the CIBER analysis also produced findings to suggest that pay-per-view business models could still play an important part in addressing gaps and barriers.
As well as quantifying the extent to which researchers are already using these services, it reveals considerable latent demand (those who have not yet used these services but might in the future). In terms of subject or discipline, researchers in the relatively well-funded areas of the health and life sciences and pharmacology and toxicology are the most likely to be current pay-per-view customers.
Publishers’ Resistance to Open Access
Fearing that the trend towards Open Access would have detrimental effects on their lucrative journal subscription businesses, the Association of American Publishers invited Eric Dezenhall – known as ‘the pit bull’ from his federal lobbying activities – to advise publishers on how to cope with the trend towards Green Open Access. Dezenhall’s message was to keep the arguments simple – that “public access equals government censorship”, and that publishers should point out what the scientific world would look like without refereed journals. The Association of American Publishers then launched PRISM – the Partnership for Research Integrity in Science and Medicine. This was an advocacy group challenging the role which the National Institutes of Health in particular was playing in promoting Open Access ‘at the expense of the viability of learned (commercial) journals.’ This was followed by attempts to get Congress to pass legislation outlawing Open Access mandates by federal funding agencies. The move backfired insofar as publisher support for the Research Works Act (RWA) resulted in a major campaign against the publishers, in particular against the lead publisher Elsevier, which was seen as the instigator in trying to stop Open Access mandates being put in place by the dozen or so major federal R&D agencies. Proposals to enact the Research Works Act have been dropped. This will be explored further in the next chapter on Political Initiatives (see chapter 26). In Europe the focus has been on a major study of the impact of institutional repositories on journal subscriptions. This project, entitled PEER (Publishing and the Ecology of European Research), was a three-year project supported by the EC eContentplus programme aimed at improving understanding of the effects of the large-scale deposit of stage-two (accepted) manuscripts in open access repositories (Green Open Access).
Several reports from this project suggested a complex relationship, one not necessarily proving that IRs undermined the commercial existence of STM journals.
Author Resistance to Open Access
In 2012–2014 the Sage Publishing group undertook surveys of researchers’ views on Open Access (Harris, 2012). The main focus was on how researchers as authors made their articles available for access. Nearly 90,000 researchers worldwide were approached, with 9% responding to a questionnaire which was a repeat of one undertaken a year earlier. The emphasis is more on the social sciences, a result of Sage’s strengths in these areas and its relative weakness in biomedicine in
particular. From these two studies it appears that researchers are more ambivalent than excited about Open Access. There is an attraction which comes from wider circulation – over 80% felt this – and the higher visibility offered is also seen as a strength of Open Access (over 60% of respondents). Whilst they accept their published articles being used for non-commercial gain, there was strong antipathy against their use for commercial purposes – two thirds felt this way. In terms of licence, the preferred choices were Creative Commons BY-NC-ND and giving an exclusive licence to publish to the publisher. Quality and production issues are not significant inducements for making their articles Open Access. The respondents did make use of repositories to access material, and repository content is judged to be as useful as the publishers’ version of record. Of the respondents who had deposited their articles in Open Access, 23% did so in an institutional repository (Green), 23% deposited in a personal or departmental website (Grey), 12% deposited in a subject-based repository and 8% in a data repository, but 52% had made no such repository deposits. The most important reason for making deposits in a repository was ‘a personal responsibility to make their work freely available.’ Another important stimulus was to meet other researchers’ need to get access to the article. A lesser stimulus was a requirement by the institution to make such deposits. The main reason for not uploading an article into a repository was lack of clarity about the legal position arising from the publisher’s policy on making articles Open Access. A further restraint seems to be lack of time to engage with the repository in getting the manuscript suitably formatted. Despite the pressures coming from agencies to have their articles included in repositories, 90% felt that academic papers would continue to be the main outputs of research.
A few respondents commented on alternative systems which might arise, with interactive multimedia resources, blogs and ‘unspecified ways’ figuring most prominently. Continued support for Open Access was also envisaged, as was an increased range of online-only journals. Despite these, academic journals were seen as remaining the principal publication outlet for quality research (68%), and only 17% saw a dual closed and open system prevailing. This goes against much of what has been said earlier about the dysfunctionality of the existing subscription system, the difference being the precise approach taken in the Sage survey as against the generic views expressed by a number of industry experts and pundits – a difference between micro and macro views (see Harris, 2012). Democratic forces have been latent in scientific communication and, in the eyes of many, irrelevant to a process in which a sense of elitism has prevailed. Citation counts and other metrics have reinforced a situation where esteem is conferred on those who have done good work. As such, competitiveness has been part of the system. Every researcher – seeking to gain recognition and, more tangibly, additional research grants or tenure – competes with others to scale the ladder of achievement as measured by their inclusion within the journal citation index. Authorship represents a strong bastion for the subscription-based publication system, as the rewards it confers are evident.
Librarians’ Attitudes to Open Access
Meanwhile, librarians have supported the Open Access movement for a variety of reasons, some of them more rational than others. The emotional support comes from the frustration of feeling economically powerless in the face of ever-escalating subscription and licensing prices, and the impression that “the publishers are mercenary villains” with a dual mission of gouging library budgets and attempting to prevent people from getting to scientific content (see comments by George Monbiot and others in chapter 8). In spite of this apparent concern over the profitability levels achieved by the ‘bad guys’ in commercial journal publishing, it appears that PLoS made a 20% margin in 2010 and, if the trends continue, could conceivably surpass Elsevier’s 30%-plus margin for 2011. Springer claims “double-digit” profits from BioMed Central. What are those librarians who have been gnashing their teeth for years over the predatory, irresponsible, evil pricing policies of the commercial publishers to make of this? It means:
– If one believes that publishers add no value, one cannot support PLoS any more than one supports Elsevier.
– If one believes that commercial publishers are the bane, then one should be as opposed to BioMed Central as one is to Elsevier.
– If one believes that “excess profits” are the problem, then one needs to recognise that Open Access is not the solution, and be as wary of the successful gold and hybrid publishers as one is of prepayment of subscriptions.
These issues about ‘corporate villains’ in scientific communication are complex and intertwined. At the crux is the adoption of an appropriate and acceptable business model, particularly if the industry is seeking to include the wider group of knowledge workers within its reach. In the meantime there is a groundswell of support for the library to take a proactive role in developing Open Access.
There has been comment on various listservs that librarians could promote their institutional repository as a
resource in the supply of published research reports at the expense of, or in competition with, data silos offered by publishers. By the same token, libraries now have opportunities to assist their faculty members in creating and providing scholarly content directly to users. One such endeavour is Open Journal Systems (OJS), a software system developed by the Public Knowledge Project (PKP), and created specifically to facilitate Open Access scholarly publishing.
Learned Societies and Open Access
How many learned societies publish Open Access journals? In a publication produced in September 2013 by a leading advocate for Open Access, Peter Suber, he identified 832 societies publishing 780 full (non-hybrid) Open Access journals. For comparison, an earlier edition (December 2011) found 530 societies publishing 616 full Open Access journals, and the first edition (November 2007) found 425 societies publishing 450 full Open Access journals. (The first edition of the list included hybrid Open Access journals; the two later editions did not.) Of the 780 Open Access journals published by societies, a majority – 451, or 58% – charged no publication or submission fees. The breakdown of journals by field was as follows:
– 631 (81%) in science, technology, engineering, or medicine
– 84 (11%) in the social sciences
– 49 (6%) in the humanities
– 5 (0.6%) in the arts
– 11 (1%) in more than one field (multiple Library of Congress subject categories apply)
The chapter on Learned Societies (chapter 23) indicates that Open Access offers a route whereby they could develop new services to meet the needs of their membership. New multimedia information services, combining formal and informal information sources, and customising information according to specific end-user profiles, offer a vibrant, innovative and healthy approach to meeting the needs of UKWs in future.
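The field percentages in the list above can be re-derived directly from the raw counts. The short sketch below (plain Python, illustrative only, not from the original study) recomputes the shares of the 780 society-published journals:

```python
# Field breakdown of the 780 society-published Open Access journals
# (counts as quoted from Suber's September 2013 list).
counts = {
    "science, technology, engineering, or medicine": 631,
    "social sciences": 84,
    "humanities": 49,
    "arts": 5,
    "more than one field": 11,
}

total = sum(counts.values())  # should equal 780
for field, n in counts.items():
    # Percentages here are computed to one decimal place; the book rounds
    # them to whole numbers (81%, 11%, 6%, 0.6%, 1%).
    print(f"{field}: {n} ({100 * n / total:.1f}%)")
```

Recomputing confirms the rounded figures quoted in the text, with the "more than one field" category at roughly 1.4%.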
Implications for UKWs The argument in favour of introducing more Open Access is compelling, not only for the unaffiliated sectors. In theory, it offers greater dissemination of research results throughout more segments of society, and has the potential for
stimulating new research and ideas from sources which were hitherto denied to researchers. It also supports the many opportunities which arise from the growing trend towards interdisciplinary research, collaborative research, workflow processes and multi-format work. “Publicly funded research should be publicly available” is the central argument of the various Open Access movements. It is particularly a stimulus for Unaffiliated Knowledge Workers to be able to access all the research information they may want or need. In some respects it may be seen as the answer to the problem facing knowledge workers – that Open Access gives them the means to access research publications hitherto denied them because of the subscription and licensing business models in vogue in the past. But does it really answer the problem? Has Open Access succeeded in enfranchising the global research community? Much depends on the extent of political support for mandating a change in the way scholarly communication takes place. The political issues involved in both the U.S.A. and the U.K. will be described in the next chapter.
Chapter 26 Political Initiatives
Introduction
Given the speed with which open access movements have been embraced, it is inevitable that questions have been asked within governments about the strategic role of openness in the communication of scientific research outputs. There is a political dimension, as governments seek to ensure that the efficiency of research activities within the country is optimised, and that the country remains competitive with other countries in terms of innovation. As reported by Eric van de Velde in his blog post ‘Open Access Politics’ on March 26, 2013: As this new phase of the OA movement unfolds on the national political stage, all sides will use their influence and try to re-shape the initial policies to further their respective agendas. The outcome of this political game is far from certain. Worse, the outcome may not be settled for years, as these kinds of policies are easily reversed without significant voter backlash.
Questions have been raised in political circles within the U.K., Germany, the U.S.A., Australia and Europe in particular during the past decade, with strikingly different decisions being made as to which open access route should be followed. For example, there has been a difference in the position of the U.K. compared with the rest of the world on the use of Green versus Gold Open Access, and institutional repositories, as the backbone of research communications. Many pundits feel that U.K. government support for the Gold open access movement is a gift to the existing publishing industry.
UK Developments
The Finch Report
In 2011 a ‘Working Group on Expanding Access to Published Research Findings’ was set up to examine how U.K.-funded research results could be made more accessible (RIN, 2012). It was commissioned by the Rt Hon David Willetts MP, Minister for Universities and Science, and made up of representatives from the higher education (HE) sector, research funders, the research community, scientific publishers, and libraries. It was chaired by Dame Janet Finch DBE, Professor of Sociology at Manchester University and independent co-Chair of the Council
for Science and Technology. The group proposed a programme of actions on how access to research outcomes could be broadened for key audiences such as researchers, policy makers and – critically as far as this book is concerned – knowledge workers and the general public. Focusing on academic publications – specifically journal articles, conference proceedings and monographs – the working group took into account work relating to research data and other outputs being investigated by organisations such as the Royal Society. It adopted an evidence-based approach, sought to identify the key goals and guiding principles that should underlie public policy on publication of, and access to, research findings, and took account of relevant policies and practices in other countries. In its deliberations it considered (i) greater take-up of open-access publishing, (ii) open-access repositories and (iii) development of a national licensing scheme. The group’s work was supported by the Department for Business, Innovation and Skills (UKDBIS); the Publishers Association (PA); Research Councils U.K. (RCUK); and the Higher Education Funding Council for England (HEFCE). The Research Information Network acted as the group’s secretariat. The report was published in June 2012, and essentially recommended that the U.K. follow an open access agenda which adopted Gold Open Access policies. The role of Green Open Access, notably the activities of institutional repositories, was downplayed. The committee’s findings released a tidal wave of comments, ranging from broadly supportive (from the commercial journal publishing industry, which liked the focus on making Gold Open Access the preferred option) to highly critical (from the Green Open Access school, who saw the committee as having been hijacked by the commercial interests of publishers and therefore not supportive of the broader interests of the scientific community).
Subsequent policy statements by groups such as Research Councils U.K. followed the outlines set out in the Finch report, but with some changes which in effect muddied the waters and did little to produce a clear and transparent U.K. national policy on open access. In essence, the Finch report embraced the transition to (Gold) open access as the preferred option, and recommended making available the necessary funds to enable authors to pay for their manuscripts to be published. The committee also advocated extending the provisions of site licensing to wider groups during the transition from subscription to OA. The role of repositories, whether institutional or subject-based, was seen as offering complementary services in such areas as enabling access to raw data and grey literature, and in digital preservation. No definitive ruling was offered on the embargo periods which would accompany such services.
The committee estimated that the transition to open access would cost U.K. higher education an extra £50–£60 million a year, made up as follows:
– £28m on OA publishing costs (i.e. publishing in fully OA and hybrid journals)
– £10m for ‘stickiness’ in subscription costs: reductions in subscription costs to compensate for the new APC income would only occur in the subsequent subscription year, so although subscription costs are expected to fall as the amount of content published via gold increases, there will be some lag in subscription charges
– £10m on extensions to licences (i.e. providing better access, for example to the health sector, for content not authored by U.K. researchers)
– £3m–£5m on repositories (some of this money may already be in the system in the form of funding, e.g. from JISC)
– £5m in other transition costs
The biggest component of this was the £28m for OA publishing costs, or APCs. To arrive at this figure, the report made a number of assumptions:
– APCs would be 20% higher than the “central case” (i.e. £1,740 rather than £1,450; the average APC paid by the Wellcome Trust in the first quarter of 2011 was £1,422)
– U.K. gold uptake would be 50% of all U.K.-authored articles (i.e. 61,797 of the 123,594 articles published per year by U.K. authors would be routed via gold)
– Rest-of-the-world uptake of gold would be 25% (i.e. the rest of the world moves more slowly in favour of gold)
– The U.K. pays for 75% of articles containing U.K. authors
What the Finch report has done is to raise the debate about the role of existing journal publishers, and the profit motive which underlies much of their activity in conferring ‘glory’ on authors as against providing effective communication channels. As such, there is little consensus behind the Finch report.
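The headline assumptions above can be cross-checked with simple arithmetic. The sketch below (illustrative only, using just the figures quoted from the report) recomputes the assumed APC level and the projected number of U.K. gold articles:

```python
# Finch report assumptions, as quoted in the text above.
central_case_apc = 1450                  # GBP, the report's "central case" APC
assumed_apc = central_case_apc * 1.20    # 20% higher than the central case
print(f"Assumed APC: GBP {assumed_apc:,.0f}")        # GBP 1,740

uk_articles_per_year = 123594            # articles published per year by U.K. authors
uk_gold_uptake = 0.50                    # 50% assumed to be routed via gold
gold_articles = uk_articles_per_year * uk_gold_uptake
print(f"U.K. gold articles per year: {gold_articles:,.0f}")   # 61,797
```

Both derived values match the report's stated figures of £1,740 per APC and 61,797 gold-route articles per year.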
In an interview conducted by Richard Poynder with Professor David Price, Vice-Provost (Research) at University College London, the latter made the case for a more balanced approach to open access, one which did not focus so heavily on Gold OA. According to Professor Price: Economic modelling shows that, for research universities, the Green route to OA is more cost effective than Gold. Under Gold, Research Councils and Universities will have to find millions of pounds from existing budgets to fund OA charges. That means that some things will have to stop to make the necessary monies available.
It should be pointed out that UCL has its own mandated institutional repository, and therefore there is a slight vested interest. More significantly, across all disciplines, Professor Price claims: The result of the Finch recommendations would be to cripple university systems with extra expense. Finch is certainly a cure for the problem of access, but is it not a cure which is actually worse than the complaint? What Finch should have done is to model Green and Gold together, to see which works out cheaper.
David Price’s message to the U.K. Minister for Universities and Science, David Willetts, is: “…Carry on talking to get the best transitional model from where we are now to a fully OA world. The Finch recommendations are only part of the answer.” Such an approach would be supported by those in the library world, those running institutional repositories, and researchers who believe that the Finch report was effectively a short-term hijacking by the publishing industry.
Issues arising from Finch
In a report published by Professor John Houghton and Dr Alma Swan using a similar economic model, the impression gained was that there was something missing in the Finch assessment. Their assessment is: “For universities, at the present time the most cost-effective route is for a university to opt for Green OA. Should the whole world turn OA, then their modelling supports Finch, in that the biggest saving for a university would come from Gold.” The key issue is that Gold requires a global transition to be effective, and is a long-term answer to a short-term problem. Along the way there is a strong role for Green/institutional repositories. There are a number of other queries which have been raised about the implementation of the Finch report. These include: – Mechanism. How will the allocation of funds for gold articles be handled within the university system? What mechanism will be established, who will decide on the allocation, and how transparent will this be? In some instances the best journal in which a researcher can publish their results may be a foreign one – how much freedom of choice will be given to enable the individual to make their own optimal decision on where to publish? This also applies to articles which are the result of multi-author contributions – what impact will Gold have in deciding where they get published? Also, what of authors who are retired, and, more relevant for this study, what of the non-affiliated authors? Where will the gold support funds come from for this group? Then there is the question of how to avoid the
same spiralling costs for the publication of gold articles as have bedevilled subscription-based journals. An imperfect market still prevails.
– Stakeholders. Another set of questions relates to learned societies. Do they have a wider role in academic engagement? The journal publishing programme in many cases currently supports the rest of their membership activities. There is a need to engage with learned societies. The issue of commercial publishers also has to be considered. Inevitably commercial publishers will defend themselves from the advocates of the Green open access movement. Some of them have indeed adopted open access within their business policies. Some have also been innovative in the introduction of new products and services. Whilst this is positive, there are now concerns being expressed about the potential for ‘double dipping’ when hybrid journals are introduced by established journal publishers. There are no clear procedures for ensuring that, as more and more gold articles are incorporated into a journal, its subscription price is reduced in subsequent years. The easiest way – given the lack of transparency – is for APCs to supplement the income received from subscriptions and licences rather than be an alternative. At the same time as questions are raised about how publishers will be affected by the move towards Gold open access, there are also questions about how research libraries will adapt. One of these is how sustainable the alternative open access model – Green – and the institutional repositories which many are implementing will prove to be. There are costs involved in creating and maintaining an in-house digital repository. Another is how far research libraries will migrate into supporting a university publication system – whether they will provide more guidance to faculty on how and where to publish the output of their research effort. University College London is one institution which feels it has a publishing role in the digital world, and in June 2015 launched its own UCL University Press, driven by the university library. Is this indicative of a trend among other research institutions and libraries?
– Discovery. Another set of questions relates to the discovery process. Whilst there are mechanisms provided by the major search engines to identify the existence of IR-deposited material, the existence of embargoes may thwart the end user’s ability to read the article on demand. Publishers are able to set copyright conditions which prevent early exposure. There is no consistency in access conditions.
– Data. There are also questions about data and datasets. A Royal Society report tackles some of these issues. The Hargreaves report on intellectual property also looks at some of the copyright implications. It is
generally agreed that data created as a result of public funding should be open access. There is also a big issue about text and data mining, which requires scale in order to be effective. Though some funding agencies demand text and data mining facilities as a condition of their funding, this has met with resistance from publishers, who are concerned about the rise of secondary and derivative publications over which they have no control.
– Implementation. The big question hanging over Finch is its implementation. There is disappointment with the plans for bringing Gold open access to fruition. There is no road map. The report has given impetus, but no follow-up is envisaged. The changing ecology has to be taken into account.
A critique of the Finch report was included in the Times Higher Education Supplement, 6 December 2012. The article included the following: It is not just the U.S. and the social sciences that will not join the U.K.’s Gold Rush. Neither will Europe, nor Australia, nor the developing world. The reason is simple: the Finch/RCUK/BIS policy was not thought through. It was hastily and carelessly cobbled together without proper representation from the most important stakeholders: researchers and their institutions, the providers of the research to which access is to be opened. Instead, Finch/RCUK/BIS heeded the lobbying from the U.K.’s sizeable research publishing industry, including both subscription publishers and Gold OA publishers, as well as from a private biomedical research funder that was rather too sure of its own OA strategy (even though that strategy has not so far been very successful). BIS was also rather simplistic about the “industrial applications” potential of its 6% of world research output, not realising that unilateral OA from one country is of limited usefulness, and a globally scalable OA policy requires some global thinking and consultation. [What is required is to] adopt an effective mechanism to ensure compliance with the mandate to self-archive in U.K. institutional repositories (Green OA), in collaboration with U.K. institutions. And scale down the Gold OA to just the affordable minimum for which there is a genuine demand, instead of trying to force it down the throats of all U.K. researchers in place of cost-free self-archiving. The U.K. institutional repositories are already there: ready, waiting – and empty.
Another recent publication, which also acknowledges the difficulties of transition, is the LERU Roadmap Towards Open Access (LERU, 2011). This document was published in June 2011 by the League of European Research Universities as an advice paper for all European universities. LERU recommends mandating Green OA today, and funding Gold only where Green OA has not been mandated. Most pundits agree that universal Gold OA publishing will be cheaper than today’s subscription publishing model – but certainly not if today’s prices are locked into the mechanism of transition to Gold OA, effectively establishing publishers’ total revenue at the level of their subscription revenues.
RCUK
According to a press release of 16 July 2012 in response to the Finch report, science papers must be made free to access within six months of publication if they come from work paid for by one of the United Kingdom’s seven government-funded grant agencies, the research councils. Together the RCUK spends about £2.8 billion each year on research. The councils are endorsing a six-month maximum delay and have announced how they will take money out of research grants to pay for open access. This will be in addition to the open access policy being pursued by the Wellcome Trust, which spends £600 million each year on biomedical research. For ‘Gold’ open access, RCUK will pay institutions an annual block grant to support the charges. If government does not give RCUK any more cash, the money required will come from existing grant funding. It has been estimated at some 1–1.5% of research budgets. In turn, RCUK expects that institutions will set up and manage their own publication funds. That might mean that universities and researchers will begin to discuss where they can afford to publish. RCUK has not said how it will sanction those authors who do not comply, but informally it is aiming for 75% compliance. If it does rigorously enforce the policy, it will mark a dramatic shift for scientists, publishers and universities. Prepaid gold papers must also carry a liberal publishing licence (Creative Commons CC-BY), making the work free to text-mine or otherwise reuse. This would be contrary to the wishes of journal publishers. Finally, two research councils – the Arts and Humanities Research Council (AHRC) and the Economic and Social Research Council (ESRC) – will initially require papers to be made free only after 12 months. But that is only envisaged as a transitional arrangement, and reflects the different nature of publishing in these areas of scholarship.
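A quick sketch of the block-grant envelope implied by the figures above (the £2.8 billion annual spend and the 1–1.5% estimate are taken from the text; the arithmetic itself is merely illustrative):

```python
# Implied annual size of the RCUK 'Gold' block grant, using the figures
# quoted above: ~£2.8bn research spend, with 1-1.5% diverted to APCs.
rcuk_spend_m = 2_800              # annual research spend, in £ millions
low_share, high_share = 0.01, 0.015

low_m = rcuk_spend_m * low_share
high_m = rcuk_spend_m * high_share
print(f"implied block grant: £{low_m:.0f}m to £{high_m:.0f}m per year")
```

On these assumptions the envelope comes out at roughly £28m to £42m a year, consistent in scale with the Finch transition costs discussed earlier in the chapter.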
As far as research librarians are concerned, the aim of this move is to eliminate a percentage of subscription budgets, which leaves librarians with less to manage, as well as patrons who have fled to their desktops. While the estimated savings in subscription costs are put at £200 million, the salaries of university personnel and others which will also be eliminated could be sizeable as well. With the RCUK hoping to set the tone for the entire European Union when it comes to OA policy, the effects could cut even deeper into library budgets and staffing.
House of Lords Inquiry
On 22 February 2013, the House of Lords Science & Technology Committee published its report on implementation of the U.K. government’s Open Access policy
(UK Parliament, 2013). According to Lord Krebs, Chairman of the House of Lords Science and Technology Committee: RCUK did not consult nor communicate effectively with key stakeholders in the publishing and academic communities when implementing its open access policy. There are still many unknowns concerning the impact of the open access policy, which is why RCUK must commit to a wide ranging review of its policy in 2014, 2016 and before it expects full compliance in 2018. We heard significant concern about the policy’s ‘one-size-fits-all’ approach, and are pleased that RCUK are both aware of these concerns and prepared to act on them. Open access is an inexorable trend. The Government must ensure that in further developing our capabilities to share research they do not inadvertently damage the ‘complex ecosystem’ of research communication in the U.K.
The House of Lords also recommended research be undertaken on whether other countries are mandating Gold OA or Green OA. It appears that other countries are indeed mandating Green, and not funding or preferring Gold, as the RCUK is proposing to do. The Lords also recommended looking into disciplinary differences. According to Professor Stevan Harnad on a listserv, “The remedies for the flaws in the proposed new RCUK policy need to be attended to promptly now, otherwise the U.K. will be the odd man out in the worldwide movement toward OA, instead of the leader it has formerly been”. Some re-thinking of Finch and RCUK’s OA policy is taking place. Prior to a more complete review of its policy in 2014, RCUK issued a revision of its Open Access policy on 6 March 2013. The major change is an explicit statement that, although RCUK prefers gold, either green or gold is acceptable. The Department for Business, Innovation and Skills (BIS) has also launched an inquiry into open access which has yet to report.
HEFCE and REF
In March/April 2014, the Higher Education Funding Council for England (HEFCE) published details of a new policy for open access relating to future research assessments after the 2014 REF. The policy describes new eligibility requirements for outputs submitted to the post-2014 REF. These requirements apply to all journal articles and conference proceedings accepted for publication after 1 April 2016. They do not apply to monographs, other long-form publications, creative or non-text outputs, or data. The requirements state that peer-reviewed manuscripts must be deposited in an institutional or subject repository on acceptance for publication. The title and author of these deposits, and other descriptive information, must be discoverable
straight away by anyone with access to a search engine. The manuscripts must then be accessible for anyone to read and download once any embargo period has elapsed. There are limited exceptions to the policy, where depositing and arranging access to the manuscript is not achievable. Wherever possible, authors’ final peer-reviewed manuscripts must be made freely available in an institutional or subject repository within Research Councils U.K.’s stated embargo limits of 12 months for science articles and 24 months for others (falling to half that length after a five-year transition period). However, REF-compliant authors will still be permitted to publish in journals that do not permit open access within those periods provided the journal is “the most appropriate publication for the output”. In such cases, the papers must merely be made open access “as soon as possible”. Some journals, particularly in the humanities, have complained that complying with RCUK embargoes would see them lose subscribers and imperil their viability. However, journals that do not comply with those embargo limits will continue to be unavailable to research council-funded authors.
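The embargo limits above can be restated compactly. The following is purely an illustrative encoding of the rules as described in this chapter, not an official HEFCE or RCUK rule set:

```python
# Illustrative encoding of the RCUK embargo limits described above:
# 12 months for science articles, 24 months for others, each falling
# to half that length after a five-year transition period.
def max_embargo_months(is_science: bool, after_transition: bool) -> int:
    limit = 12 if is_science else 24
    return limit // 2 if after_transition else limit

print(max_embargo_months(is_science=True, after_transition=False))   # 12
print(max_embargo_months(is_science=False, after_transition=True))   # 12
```

Note that, under the transition-period halving, the post-transition limit for non-science articles (12 months) equals the initial limit for science articles.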
Developments in the U.S.A.
FASTR
The Fair Access to Science and Technology Research (FASTR) Act is the successor to the Federal Research Public Access Act (FRPAA). FRPAA was introduced in three earlier sessions of Congress (May 2006, April 2009, and February 2012) but never came up for a vote. In the 113th Congress, Congressional supporters of OA decided to introduce a modified bill. The result is FASTR, a new and strengthened version of FRPAA. Both bills required open access (OA) to the peer-reviewed manuscripts of articles reporting the results of federally funded research, and that these be deposited in institutional or subject-based repositories. FASTR was introduced in the Senate by John Cornyn (R-TX) and Ron Wyden (D-OR). It covers those federal agencies spending more than $100 million per annum on extramural research. It mandates green OA (through repositories), and is silent on gold OA (through journals). FASTR, like its predecessor, requires deposit of the final version of the author’s peer-reviewed manuscript. It also allows consenting publishers to replace that version with the published version. FASTR gives agencies freedom to designate a suitable repository for the mandatory deposits, where suitability includes “free public access, interoperability, and long-term preservation”. Agencies may host their own repositories, the way NIH hosts PubMed Central, or ask grantees to deposit in suitable institutional
or disciplinary repositories. It asks for OA “as soon as practicable” after publication in a peer-reviewed journal, and requires it “no later than 6 months” after publication. However, it exempts classified research, unpublished research, royalty-producing research such as books, and patentable discoveries. It is explicit in not requiring an amendment to the copyright law or patent law. Because FASTR applies to more than a dozen federal agencies, according to Peter Suber: FASTR would mandate OA for more research literature than any other policy ever adopted or ever proposed. It would significantly increase both the corpus of OA literature and the worldwide momentum for funder OA mandates. It would come as close as any single step could to changing the default for the way we disseminate new scientific work, especially publicly-funded work. (http://www.earlham.edu/~peters/fos/newsletter/08-0209.htm#frpaa)
The NIH budget alone is more than six times larger than the budgets of all seven of the U.K. research councils put together. Hence, it is significant that FASTR disregards the gold-oriented RCUK/Finch policy in the U.K., and sticks to the FRPAA model of a green mandate.
White House Memorandum
According to Dr. John Holdren, Assistant to the President for Science and Technology and Director of the White House Office of Science and Technology Policy, the Obama Administration agrees that citizens deserve easy access to the results of research their tax dollars have paid for. This became part of a White House memorandum on open access. The White House policy memorandum also follows that of the FASTR bill closely. However, in addition to addressing the issue of public access to scientific publications, the memorandum also requires that agencies improve the management and sharing of scientific data produced with federal funding. “Access to pre-existing data sets can accelerate growth by allowing companies to focus resources and efforts on understanding and fully exploiting discoveries instead of repeating basic, pre-competitive work already documented elsewhere”. For example, making human genome sequences publicly available has spawned many biomedical innovations – not to mention many companies generating billions of dollars in revenues and the jobs that go with them. Going forward, it is claimed that wider availability of scientific data will create innovative economic markets for services related to data curation, preservation, analysis, and visualization, among others.
The policy (http://www.whitehouse.gov/sites/default/files/microsites/ostp/ostp_public_access_memo_2013.pdf) reflects inputs from scientists and scientific organizations, publishers, members of Congress, and other members of the public – over 65,000 of whom signed a We the People petition asking for expanded public access to the results of taxpayer-funded research. FASTR is stronger than the White House memorandum, requiring shorter embargo periods and using stronger, clearer language on open licensing and reuse. Nevertheless, the two approaches are complementary. The Obama directive is already in effect, but could be rescinded by the next president; FASTR has not yet been adopted, but could entrench federal OA mandates for the long term.
The Research Works Act
Prior to the White House memorandum and FASTR, there had been an attempt, sponsored by the U.S. science publishing community, to reduce rather than expand open access to federally funded research output in the U.S. In December 2011 the US House of Representatives put forward the Research Works Act HR 3699 (RWA), which supported the publishing industry’s case for less involvement by federal agencies in demanding compliance with green access submissions. It split the stakeholders into competing camps, and even within the publishing industry itself several leading companies distanced themselves from the RWA. In effect, in three short paragraphs, it claimed that the National Institutes of Health’s public access mandate was unlawful. Those promoting the RWA claimed that Congress had already adequately addressed the question of public access to federally funded research through Section 103 of the America COMPETES Act, which does not establish any actual public access policies, but rather called for an Interagency Working Group to discuss priorities for federal agencies considering such policies. The backlash against the RWA and the STM publishing community was intense. The general opinion was that this strident policy was ill-conceived and not needed. It was also unlikely to be passed. In speaking against the RWA, Dr Stuart Shieber, Director of the Office for Scholarly Communication at Harvard University, argued that open access to research is an intrinsic public good. He quoted Thomas Jefferson, noting “the most important bill in our whole code is that for the diffusion of knowledge among the people.” Shieber suggested that the traditional publishing market is dysfunctional – library budgets for serials continue to shrink while journal profit margins increase. He also
referred to the growing body of research demonstrating that economic growth flows from the increased innovation enabled by openly accessible research.
CHORUS
Given the poor reception of the RWA, and the universal disdain from the rest of the scientific information community, publishers have offered another suggestion. In June 2013 the Association of American Publishers (AAP), a group of primarily subscription-based journal publishers, came up with CHORUS (the Clearinghouse for the Open Research of the United States). It is a proposed system for finding federally funded science articles, but only through accessing each publisher’s web site. The AAP has put forward CHORUS as a bid for a coalition of publishers to handle many of the requirements outlined in the federal proposals requiring open access to federally funded research. CHORUS plans to “work out the system architecture and technical specifications over the summer and have an initial proof of concept completed by August 30” (2013). Publishers have offered to cover the costs of implementing CHORUS. CrossRef and its newly launched FundRef service, which ties published papers to the grants that fund them, would provide much of the infrastructure. Given that publishers were already planning to include FundRef information and were participating in dark archives such as CLOCKSS, LOCKSS, and PORTICO, “making a version of the paper available to the public, that’s almost a software triviality,” claimed an AAP spokesperson. Also, building an integrated interface and APIs are tasks with which publishers have much experience. It is claimed that CHORUS will mean a substantial saving in time and trouble for federal agencies, and less displacement of funds towards setting up a federally initiated infrastructure. It will therefore mean more funding for research. It will also allow publishers to track data about article usage. However, CHORUS does not yet address the data or text-mining portions of the federal requirements. Others in the scholarly community have suggested more sinister motives.
“Given that the AAP clearly thinks that public access policies are bad for their businesses, they would have a strong incentive to make their implementation of a public access policy as difficult to use and as functionless as possible in order to drive down usage and make the policies appear to be a failure,” warned PLOS co-founder Michael Eisen. One tactic which could be used is what has been called ‘soft technological protection measures’: deliberately heightening user annoyance through such irritants as disabling printing and
downloading, using poor search algorithms, turning away web search engines, etc. The aim, of course, is to make the open-access materials a poorer substitute for what libraries buy through subscriptions, ‘Big Deals’ and licences. Nevertheless, over 90 publishers, societies and other organisations were giving their support to CHORUS, as reflected in the number of signatories as of January 2014.
SHARE
Three days after CHORUS, a proposal called SHARE was released by a group of universities and libraries. SHARE (the SHared Access Research Ecosystem) involves establishing a federation of cross-institutional digital repositories. In particular, the plan is to federate existing university-based digital repositories, obviating the need for central repositories. The SHARE system would draw on the metadata and repositories already in place in the institutional community, for example using ORCID numbers to identify researchers. There would be a requirement that all items added to the system include the correct metadata, such as the award identifier, the Principal Investigator number, and the repository in which the item sits. In the SHARE proposal, existing repositories, including subject-based repositories, would work together to ensure metadata matching in order to become a ‘linked node’ in the system. A significant challenge in the proposal is the affirmation that, for the White House policy to succeed, federal agencies will need universities to require of their Principal Investigators “sufficient copyright licensing to enable permanent archiving, access, and reuse of publication”. While sounding simple, in practice this means altering university open access and intellectual property policies, and running a substantial educational campaign amongst researchers. This would be no small feat. The timeframe the SHARE proposal puts forward is phased, with requirements and capabilities developed within 12–18 months, and the supporting software completed within another six months. There is a two-year minimum period after initiation before the system would be operational. Meanwhile, it was announced in February 2014 that the Association of Research Libraries (ARL) had been awarded a joint $1 million grant from the Institute of Museum and Library Services (IMLS) and the Alfred P. Sloan Foundation to develop and launch the SHared Access Research Ecosystem (SHARE) Notification Service.
This aims to make research assets more discoverable and accessible. It will inform members when research results – including articles and data – are released. A prototype of the service is scheduled for deployment by late summer 2014, a beta
release incorporating community feedback is targeted for Autumn 2014, and the full release is expected in Autumn 2015. So which of the above projects will win through? Despite the several proposals emerging within months of one another, the sophistication of all the US proposals indicates that they had been in development for some time. Indeed, the CHORUS proposal would have required lead time to negotiate ‘buy-in’ from the different publishers. For its part, the SHARE proposal includes a complex flow chart which appears to be the equivalent of a high-level system architecture. The CHORUS proposal states that it would be ready by 14 June 2013. According to a post on the LibLicense discussion list, SHARE was developed without awareness of CHORUS, so it is not an intentional ‘counterattack’. There are glaring differences between the CHORUS and SHARE proposals. SHARE envisions text and data mining as part of the system, two capabilities missing from the CHORUS proposal. SHARE also provides searching through Google, rather than requiring the user to go to the CHORUS system to find materials, as the latter seems to be proposing. But as Peter Suber points out: “CHORUS sweetens the deal by proposing OA to the published versions of articles, rather than to the final versions of the author’s peer-reviewed manuscripts”. So which will be adopted? As one commentator has suggested, CHORUS will work because publishers have experience setting up this kind of system, whereas SHARE does not have a good track record in this area. A cynical publisher might say: let’s fight for CHORUS, but let’s make sure SHARE wins. Then we (the publishers) have the best of all worlds: the costs of the service will not be ours to bear, the system will work haphazardly and pose little threat to library subscriptions, and the blame will lie with others.
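The metadata requirements sketched for SHARE above can be illustrated with a hypothetical record. Every field name and value below is invented for illustration and is not drawn from an actual SHARE schema; the ORCID shown is the well-known example identifier, and the grant number is made up:

```python
# Hypothetical SHARE-style notification record (field names illustrative only).
record = {
    "title": "Example article title",
    "authors": [{"name": "A. Researcher", "orcid": "0000-0002-1825-0097"}],
    "award_identifier": "EP/X000000/1",               # invented grant number
    "principal_investigator": "0000-0002-1825-0097",  # ORCID used as PI identifier
    "repository": "https://repository.example.ac.uk/",
}

# A 'linked node' in the federation would need the required fields present:
required = {"award_identifier", "principal_investigator", "repository"}
missing = required - record.keys()
print("valid record" if not missing else f"missing fields: {missing}")
```

The point of the sketch is simply that federation depends on every participating repository supplying the same small set of matching fields, which is what the proposal's 'metadata matching' requirement amounts to.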
Implications for Knowledge Workers
Though UKWs operate in a world less dictated by the policy making of governments and funding agencies, they are nevertheless affected by the results of implementing political initiatives. In particular, whether open access becomes central to the STM publishing process, and whether it becomes Green-, Gold- or Hybrid-dominated, will affect UKWs’ ability to gain easy access to published material. In that respect, UKWs would support an extension of the open access movement, along with funder mandates ensuring that content is free at the point of use. However, as reflected in the political developments in both the U.K. and the U.S.A., the issue of open access is far from straightforward. Tensions have been
exposed between the library and publisher communities about ‘what is best for the scientific community’, and this has led to the U.K. and U.S.A. taking very different approaches to open access implementation for STM. UKWs should be the beneficiaries of whatever system emerges, because the cost of access, and some of the controls over access, would be reduced or eliminated should any of the open access systems be implemented. The question is how soon such openness can be brought into effect.
Chapter 27
Summary and Conclusions
Introduction
The interaction of the above developments impacts those organisations active in scientific information. Their operations will be affected by the changing environment. This chapter will look at each of the main stakeholders in the current information scene with a view to seeing how robust they will be when operating under new business conditions. There are short-term (operational) issues as well as longer-term (strategic) policies involved.
Operational Issues
Editorial
Several pundits suggest there is a future role for scientific publishers despite the impact of the ‘perfect storm’. Additional services and products could be incorporated into their current operations, such as services relating to all aspects of the research life cycle – not just the dissemination of research results, but also the provision of information about data sources, funding, links to related publications and collaborators, establishing information hubs, curating laboratory notes, etc., all as part of a holistic research support environment – in fact, covering the whole research cycle with information services, not just the final stage of publication. Publishers could take lessons from mass-market online services such as Amazon and Google, which have created services to fit the new information scene. In addition to extending the range and scope of ‘publishing’ as we know it, new activities could be offered by publishers: welcoming users to their web sites and remembering what they last searched for; offering access to articles similar to the ones last accessed; providing easy links to third-party information services, even to competitors’ sites; and being much more proactive in offering an enticing environment for information users of all types – affiliated and unaffiliated. These may help break down some of the stuffiness that is a feature of most scientific publishers’ online catalogues of services. Some opportunities for scientific publishers to adapt to the networked economy have been listed by Nielsen (Nielsen, 2009). These include:
– Include recommendations (‘More like this…’ as per Amazon)
– Development of relevant search engines, incorporating more social features and focused on specialist needs
– Tools for real-time collaboration by scientists in the workflow
– Providing scientific blogging and wiki platforms
– Facilitating the creation of webs of raw data.
Another set of opportunities revolves around extending the editorial scope of STM publishing into tertiary publishing – increasing reviews and assessments of research results so that they are understandable by a broader audience of UKWs. This was encapsulated in the earlier description of the Nautilus concept, in which a series of rings or spirals represents differing levels of interest. Reaching out to provide media support for these far-flung editorial spirals is something which is tackled only sparingly at present.
Commercial

The problem could be in harnessing support across the STM publishing sector to create a common platform providing access to common services. The past record of STM publishing has not been strong in favouring cooperation and collaboration among competitors. Though there are associations which act on behalf of STM publishers – the International Association of STM Publishers and the Association of Learned and Professional Society Publishers being prime examples – these associations tend to focus on protecting their members from copyright and similar infractions rather than on building new collaborative platforms for the future. There are exceptions, such as CrossRef, but in this case the exception proves the rule. Cross-publisher platforms which take on board the potential that the Internet and Web increasingly offer, and which are designed to serve a new type of audience – the ‘digital natives’ – are required. It has frequently been left to ‘outsiders’ to run with new commercial ventures which exploit new business opportunities in STM. A case in point is document delivery, where a California-based, venture-capital-backed initiative offers a new commercial vision of the future for STM publishing. If the DeepDyve business model were adopted by publishers for downloading full documents (rather than just ‘renting’ the article for 24 hours, as DeepDyve currently does – see chapter 16), and a download price of, say, £2.50 were adopted, this would give publishers additional income from professional knowledge workers in the U.K. alone of £275 million. It could open up a new global market for publishers of £3.5 billion. This amounts to an addition of one-third of their current journal
business. Yet it is an ‘outsider’ which has identified such a commercial opportunity, and not the publishing industry itself, which has traditionally been unsupportive of document delivery per se. It raises the question of whether the additional market in document download sales to knowledge workers would be at the expense of the subscription/licensing model geared to research libraries within the ‘academic tent.’ Publishers could feel that this new market might cannibalise their existing sales and prove less profitable overall. The price issue is one which Chris Anderson (Anderson, 2009b) addresses in his book ‘Free – The Future of a Radical Price,’ in which he argues that an innovative approach to pricing products such as information is required. This issue creates a huge ‘elephant in the room’: whether selling articles or subscriptions will be overtaken by authors paying relatively small amounts to have their articles published, or whether institutional repositories will become the new hubs for information dissemination. The outcome of this Open Access trend alone will determine the scale and profitability of STM publishing in years to come, but the picture is further distorted by the other ingredients of the ‘perfect storm’ described in chapter 4.
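The scale implied by these figures can be sanity-checked with simple arithmetic. The £2.50 price and the £275 million and £3.5 billion revenue figures are the ones quoted above; the download counts derived from them are a back-of-envelope inference, not figures from the text:

```python
# Back-of-envelope check of the download-pricing estimates quoted above.
# The price and income figures come from the text; the implied download
# counts are derived here, not stated in the source.
price_per_download = 2.50          # GBP per full-document download

uk_income = 275_000_000            # GBP, additional UK income quoted
global_income = 3_500_000_000      # GBP, potential global market quoted

uk_downloads = uk_income / price_per_download
global_downloads = global_income / price_per_download

print(f"Implied UK downloads/year:     {uk_downloads:,.0f}")       # 110,000,000
print(f"Implied global downloads/year: {global_downloads:,.0f}")   # 1,400,000,000
```

In other words, the UK estimate assumes roughly 110 million paid downloads a year by professional knowledge workers, which gives a sense of how large a behavioural change the projection presumes.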
Marketing

There are new markets to be explored. The current focus among publishers is on tapping into emerging geographical markets, such as South East Asia, to compensate for static or declining sales in western countries. It is more difficult to ‘create’ new markets among the burgeoning knowledge worker sector globally, hence the reliance on shifting proven marketing strategies to emerging countries. As suggested earlier, knowledge workers operate in the professions, in SMEs and in many other peripheral areas. The needs and motivations for STM material differ across all these sectors of UKWs. Greater understanding of the changing needs governing the use of research results may be a prerequisite for stakeholders in STM information in future, and this understanding will require them to take on board the various economic, financial, technical, social, psychological and administrative trends. However, the significant marketing challenge is to attract citizen scientists and bring them into the habit of buying individual articles/items. UKWs offer a vast audience with a high annual growth rate, but one which can only be reached by adopting appropriate business models and incentives.
Products and Services

Hubs, Portals and Communities

The above is based largely on individuals buying specific articles of interest – the ‘article economy’ approach. An alternative pricing model revolves around delivering a multi-package service to individuals. This includes provision of a range of media – not only research articles, but also conference proceedings, moderated listserv items, datasets, software, mashups, etc. These media items would enhance the efficiency of researchers by giving support in their workflow process and not just meeting a specific information need. It is a service concept rather than just a product sale. Putting the packages together requires editorial skills beyond those found in traditional journal publishing. Instead of an editor, the service would need a gatekeeper, searching for and including information items from a variety of public and private sources. It needs a person or group fully in tune with the information needs of the target group. The profile is more akin to that of a moderator of a specialist listserv. The profiles described by Gladwell as effecting ‘tipping points’ (the Law of the Few, which embraces connectors, mavens and salesmen) give an indication of the leaders who could bring such multimedia packages onto the global STM scene (Gladwell, 2000). Current direct-to-scholar portals provided by commercial journal publishers do not meet expectations. Each portal is limited to content from just one publisher or information provider. Without interoperability, each publisher portal is an island. Only scholars covered by a site license can access them. Though portals offer opportunities for future growth, it may take time for scholarly publishers to make the switch to providing the type of interlinked services needed. Publishers could unlock marketing and business-intelligence value from their existing systems.
Knowledge about managing the publishing process combined with analysing online usage data from their operations could give publishers unprecedented insight into every aspect of scholars’ professional lives in education, research, and development. But this knowledge on its own is insufficient to stimulate an innovative approach to scientific communication in a digital world. However, such services, if implemented by learned societies, could give them a significant future role. They are the ones who could monitor and understand changing needs of their membership. They could segment their membership and be aware of information requirements of each segment. A pricing formula that meets the individual’s expectations, based on an annual membership fee to the learned society, may be the basis for an alternative business
model. There is a role for a learned society publisher to capitalise on its market knowledge, incomplete though this may be.

Selective Dissemination of Information (SDI)

Using computers to match a profile of the user’s interests against the inflow of research outputs from different sources is another version of the above, and again relies on the adoption of new technology to reach wider audiences. With the increasing power of computing and the drive towards user-centric technology, it is possible to conceive of the day when the machine could automatically and instantaneously infer a taxonomy to suit the user. Such automated systems would take advantage of machine learning to improve themselves; the system would get better every time it was used. Meanwhile, the profiles of interests of individual researchers – affiliated and unaffiliated – could be derived by asking individuals to fill in forms describing their research interests, though it may be difficult to generate much support or compliance. However, it would also be possible to build a profile from the individual’s past search terms, and to anticipate which fields the researcher may be interested in (monitoring the ‘digital exhaust trail’ left by previous online activity). Over time the profiles could be modified as additional search terms are used, subject to privacy conditions being met. Offering suggestions based on the search terms can help build up the profile, as Amazon and others do. The profiles could then be matched against the latest metadata from a variety of external information resources. Data from many publicly available third-party sources – possibly even competitor publishers – could be incorporated to create an extensive collection which could be sifted through and matched against user profiles.
Whether it is called SDI, RSS or Research Alerts is beside the point – the main issue is to generate as much critical mass as possible in order to create an enticing service for as many profiled researchers as possible.
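The matching idea described above can be sketched in a few lines. This is a purely illustrative toy, not any existing service’s design: the profile terms, metadata fields and the 0.5 score threshold are all assumptions.

```python
# Toy Selective Dissemination of Information (SDI) matcher: score incoming
# article metadata against a user's interest profile and alert on matches.
# A real service would learn the profile from past searches (the 'digital
# exhaust trail') rather than hard-code it as done here.

def match_score(profile_terms, article):
    """Fraction of profile terms found in the article's title or keywords."""
    text = (article["title"] + " " + " ".join(article["keywords"])).lower()
    hits = sum(1 for term in profile_terms if term.lower() in text)
    return hits / len(profile_terms)

profile = ["open access", "bibliometrics", "knowledge workers"]

incoming = [
    {"title": "Open Access and the Unaffiliated Knowledge Worker",
     "keywords": ["open access", "knowledge workers"]},
    {"title": "Thermal Properties of Graphene", "keywords": ["materials"]},
]

alerts = [a["title"] for a in incoming if match_score(profile, a) >= 0.5]
print(alerts)  # only the first article clears the 0.5 threshold
```

In practice the term-overlap score would be replaced by a learned relevance model, and the `incoming` stream would be fed from publishers’ metadata feeds, but the pipeline shape – profile, stream, score, alert – is the same.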
Strategic Issues

For publishers, the transition to Gold Open Access is tricky. They hope to maintain their current level of revenue while replacing the income stream from site licenses with an equivalent or greater income stream from APCs. As long as site-license revenue remains their main source of revenue, publishers cannot take risks and experiment with new, potentially unviable business models. Nor can they afford to compete with libraries and journal aggregators, their current customers and partners. This calculation will change when Gold Open Access
reaches a critical or ‘tipping’ point, and a new business philosophy will be found for STM publishing. Publishers, indexing services, journal aggregators, start-ups, some nonprofit organisations, and library-system vendors all have expertise in producing compelling post-Open Access services. However, publishers only need to protect their Gold Open Access income; all others need a reasonable expectation of new revenue to invest in and develop new services. This potentially sets the stage for a significant consolidation of the scholarly-communication industry in the hands of larger publishers. Once Gold Open Access achieves traction, academic libraries may be able to engage with publishers as competitors. When site licences disappear, there is no more journal-collection development, and digital lending of journals disappears as a core service. This then requires appropriate strategic decisions from leaders in academia. With its recently released new mission statement, the Harvard Library seems to pave the way: “The Harvard Library advances scholarship and teaching by committing itself to the creation, application, preservation and dissemination of knowledge.” The future of the academic library will be built on these pillars. While the revised mission statement necessarily lacks specifics, it is clear what it omits – traditional collection development.
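The revenue-replacement problem at the heart of this transition reduces to a simple ratio. The figures below are hypothetical placeholders for a mid-sized publisher, not data from this chapter:

```python
# Revenue-neutral APC sketch: the per-article charge a publisher would need
# if all site-license income were replaced by author-side APCs.
# Both input figures are illustrative assumptions, not data from the text.
site_license_revenue = 10_000_000  # GBP/year, hypothetical publisher
articles_per_year = 2_500          # hypothetical annual article output

revenue_neutral_apc = site_license_revenue / articles_per_year
print(f"Revenue-neutral APC: £{revenue_neutral_apc:,.0f} per article")
```

The point of the exercise is that the ‘safe’ APC is fully determined by existing revenue and article volume, which is why publishers are reluctant to experiment while site-license income still dominates.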
Future of Scientific Communication

It is in the hands of scholars and researchers to determine the information dissemination process which will emerge, not publishers, who will follow where others lead. Although researchers are currently wedded to the existing referee-based system of journal article publishing, the various ‘perfect storm’ pressures outlined in chapter 4 suggest that other options may emerge over the next two to five years. One of these options is for scholars to break away from the closed-garden mentality which they currently share with publishers. There is an elitism about STM publishing, an impression that communication should remain among a few specialists and not be shared with a wider community. However, the emergence of a collaborative network economy, particularly around ‘Big Science’ and citizen science, puts pressure on the singleton, elitist approach. A growing number of educated individuals outside the academic arena is leading towards a more democratic approach to the production and use of scientific research output – the ‘wisdom of the crowd.’ There is an assumption that there will be a migration from elitism to greater openness and democracy. In effecting this migration, UKWs will be drawn into the scientific communication process, and science research and publication demographics will change.
University libraries typically define their constituencies as those scholars formally associated with their universities. In many instances, not even alumni are included. The narrowness of their current constituencies needs emphasizing – university libraries do not serve the public, let alone UKWs. Publishers may also need to adapt, particularly with respect to new business models such as Open Access. As one pundit has mentioned on the listservs, “publishers who are battling against Open Access should have our sympathy but not our support” (Anderson, 2013b). The benefits from Open Access are too great to forgo just in order to preserve a few successful commercial publishing companies. Each of the existing stakeholders therefore faces different challenges in the future.
Role of STM Publishers

In a presentation at the Academic Publishers in Europe 7th annual conference in Berlin in 2012, Derk Haank, chief executive of Springer Science and Business Media (as it then was), gave a summary of the actions that he felt publishers should take to confront the challenges and survive the perfect storm. Derk Haank is a traditional publisher, with considerable success in ensuring profitable returns on business activities. He has also shown that he is prepared to adopt innovative approaches: for example, Springer has been a pioneer among commercial publishers in supporting aspects of Open Access under Haank’s watch. His take-away survival message from the conference for publishers was:
– Publishers should focus on content and not on the many bells and whistles
– Publishers must learn to live with only marginally increasing library budgets
– Publishers should look to developing countries to expand their business
– Publishers should not rely exclusively on technology
– Instead, publishers should look at adopting more innovative business models.
There are some who feel this to be a conservative, too-cautious reaction to the business challenges facing scientific publishers. Some even question whether publishers will be able to survive – whether they will be able to cross the ‘valley of death’ in their migration from print-based to electronic publishing. Others claim there are opportunities for publishers provided they adapt to the new circumstances.
An industry consultant, Mark Ware (then with Outsell), made the following observations about the current status of the commercial publishing industry at the same Academic Publishers in Europe conference (Berlin, January 2012):
– Copyright issues were focused on the Research Works Act (in the U.S.A.), and this generated a wide-scale backlash against publishers, Elsevier in particular.
– Local-language publishing in STM has now grown to 25% of output and needs consideration by STM publishers – of particular relevance to global UKWs.
– Innovation in business models is expected to occur (driven by new players such as DeepDyve, Mendeley, etc.).
– New entrants are expected (particularly on a subject basis).
– Diversification from the current portfolio of products is expected, involving a change from a product-centric approach to an emphasis on services in future.
– Decision-support tools offer higher growth rates of +17%, compared with +1% for physical journals.
– Publishers will acquire start-ups: Elsevier acquired Ariadne, Quosa and Mendeley; Macmillan bought Digital Science; PubGet was acquired by CCC.
– More cross-platform collaboration will emerge, such as Mendeley working with Swets (although this predated Swets’ financial collapse).
– There will be greater product complexity.
– There will be continued disruption from Open Access.
The issue of ‘Is scientific publishing about to be disrupted?’ was raised by Michael Nielsen in a blog post of 29 June 2009 about the future of science (Nielsen, 2009). His premise, as described in chapter 8, was that a number of industries have been side-lined because they were structurally unable to cope with the new economics facing their sectors. He cited print newspapers, music and minicomputers.
The leaders of these industries were, he claims, neither stupid nor malevolent – rather, the underlying structure of their industries, primarily their scale of operations, was unsuitable for the new and emerging environment. The immune systems of the newspaper, music and minicomputer industries were protecting an organisational structure which ran counter to the openness and demand for free information which developed on the back of the technological revolution in IT. Nielsen made the point that scientific publishing faces the same disruption. Large publishing houses will have to compete with new companies which are focused on meeting specific new digital demands within the information industry, and this poses a different set of operational requirements. He claims that in ten
to twenty years’ time “scientific publishers will be technology companies… Their foundation will be technological innovation and most key decision-makers will be people with deep technological expertise.” The competition that publishers face is exemplified by a traditional publisher which reinvented itself as a technology company in support of libraries: Bepress. Founded in 1999 by three professors at the University of California, Berkeley, Bepress (formerly Berkeley Electronic Press) spent its first decade building up a portfolio of peer-reviewed journals. In 2011, however, it took the surprising decision to sell all its journals and reinvented itself as a technology company. Instead of publishing journals, Bepress now licenses publishing technology – its flagship product is a cloud-based institutional repository/publishing platform called Digital Commons. Is this a sign of things to come: publishers becoming technology companies and librarians becoming publishers? The President and Chief Executive Officer of Bepress, Jean-Gabriel Bankier, believes it is. As he puts it, “The future of scholarly publishing now lies in the hands of libraries and scholars.” Bankier cites a U.S. study in which 55% of the universities and colleges surveyed said that they are offering or considering offering library publishing services – and not just through university presses either (Bankier, 2014). What this means, he says, is that if publishers “want to continue to play a significant role in supporting the changing needs of the research community” they will need to follow the example of Bepress and morph from content providers to technology companies. However, some claim that publishers may soon have to compete with libraries. The business case for enticing users away from library-managed portals is compelling.
As funding agencies and universities enforce Open Access mandates and publishers transition their journals from the site-license model to the Gold Open Access model, libraries will cease to be the mechanism through which money streams from universities to publishers. In the Gold Open Access world, the publishers’ core business would be in developing direct relationships with scholars/authors, not librarians. The scholars would control the purse strings. Nielsen suggests there is a flourishing ecosystem of start-ups in scientific publishing that are experimenting with new ways of communicating research, radically different in approach from journals. Some of the start-ups he described include ChemSpider (acquired by the RSC); Mendeley (a platform for filtering scientific papers, backed by those involved in Last.fm and Skype, and since acquired by Elsevier); and SciVee, a YouTube for scientists. According to Nielsen, “Scientific publishers should be terrified that some of the world’s best scientists, people at or near their research peak, are spending hundreds of hours each year creating original research content for their own blogs, content that in many cases would be difficult or impossible to publish in a conventional journal.”
The content will no longer be static – value-added networked services will emerge using different multimedia. As Joe Esposito, the U.S. information consultant, commented on a listserv: “Always bet on the entrepreneur, as he or she doesn’t care what gets broken in the process of making something new and effective” (Esposito, 2012a). There is much in the current dysfunctional scientific communication process which could be broken, and high profit margins are but one example. Failing such a transition from a production-based publication system to one which provides new forms of added value, Nielsen is pessimistic about the future of scientific publishing as we now know it. The structural architecture which exists in STM publishing runs counter to the grass-roots demands of an emerging collaborative and social networking economy.
Warning for Publishers

The growing controversy between scientists and publishers over access to scientific information has caught the attention of investors. In a briefing note on Elsevier, Claudio Aspesi of Bernstein Research warned investors that publishers might be on the verge of falling out with scientists. He wrote: “We continue to be baffled by Elsevier’s perception that controlling everything (for example by severely restricting text and data mining applications) is essential to protect its economics” (Poynder, 2012b). He said that some of the commercial restrictions imposed by publishers restrict the work of researchers. “If the academic community were to conclude that the commercial terms imposed by Elsevier are also hindering the progress of science or their ability to efficiently perform research, the risk of a further escalation in what is already an acrimonious debate would rise substantially,” he wrote. “Elsevier needs to take a much harder look at what it is doing to work well with the academic community at large, since it believes that its future lies in tapping the funding for science.” Governments and other funding bodies may then look a lot less kindly on subscription publishers if they antagonise scientists as well. If investment analysts (such as Claudio Aspesi at Bernstein Research, see above) begin to doubt the strategic directions being followed by publishers, then publishers’ ability to raise funds for future investments from third parties may be compromised.
Role of the Research Library

Meanwhile, there has been an explosion of different types of resources – point-of-care tools, online textbooks, evidence-based databases. If cost-effective use of
the time, energy and talents of researchers is to occur, there is a need for a professional librarian (perhaps in another guise) to help make sense of this increasingly complex information space. Researchers may not need somebody to manage the library – they need someone to make sure that researchers have the best information available, in the right place, at the right time, in the most cost-efficient way. In a highly competitive academic and research sector, the community can ill afford to support a traditional library or librarian approach. As Bernie Sloan commented on Liblicense in late 2007: “…if this sort of trend continues will it gradually begin to marginalize the library, bit by bit? In other words, if more information becomes available freely, will that lead people to think they need the library less?”
According to T. Scott Plutchak (Director, Lister Hill Library of the Health Sciences, University of Alabama at Birmingham), of course it will. But that has been happening piecemeal for years now. People do need the library less, but they may need the new skillsets of librarians more than ever. According to Plutchak, “One of my gripes with the Library 2.0 crowd is that they’re not radical enough. For all of the chatter about embracing change and embracing the users and becoming more participative and making use of social software and social networks the focus is still firmly on the success of ‘the library’. If we are really focused on what our communities need, we would stop talking about ‘the library’ altogether. Future activity does not take place in the library building – the ‘new’ librarians will be increasingly in the faculty, participating in curriculum meetings, teaching in the lecture halls, holding office hours in the student lounges. That is where the new librarians belong” (Plutchak, 2007). And librarians could become active in ensuring that the emerging needs of the local knowledge worker communities outside their campus are also being addressed. Will librarians be able to find a role in the migration towards Web 2.0 and the semantic web, in the development of ontologies and creation of quality metadata to enable targeted access to the world’s STM information resource? Will the body of professional training cope with this change in approach? Or will yearning for control, order and structure over vast quantities of unstructured information be the final nail in the coffin for the traditional librarian? What is the future essential role of libraries and librarians? For libraries in future it involves buildings and managing physical collections, tied up with physical space. For librarians in future it could be managing the knowledge base. This gets past the notion of their being custodians of space and physical buildings. 
The peer-reviewed individual article is no longer dominant in many subject areas. The article has been transformed and has become part of a network, with
links to other text services and databases. The research article is often the gateway or portal into a world of simulations, data analyses, modelling, etc. Although the article has become richer in its evolution, it has become less essential as a standalone entity. New data resources are being created, organised and supported, often by the research community itself rather than by librarians. The librarian needs to cope with such changes. Some librarians look to institutional repositories (IRs) as providing them with a new purpose. So far, however, librarians have been concerned that the rate of deposit in IRs has been low (without mandates), and they see IRs as at best a limited success. IRs are good places for all the digital ‘grey literature.’ Applying metadata to such items could offer a new role for librarians – metadata which enables the grey literature to be captured by the search engines. This grey literature could then offer further competition to the research article. Wikis, blogs and social publishing will also have some impact on the librarian, the extent of which is currently unclear; it would, however, be naïve to assume that they will have no role. A survey conducted by OCLC early in 2012 among librarians from the United Kingdom, Germany and the Netherlands showed that practitioners expect library usage to change considerably. The increase in online visits expected by 71–85% of librarians (percentages vary by country) contrasts dramatically with their expectations of low growth in physical visits. This means that users will continue to rely on libraries for their information, but not necessarily by coming through the library doors. The primary reason for library use will also change in the next five years, according to 59–71% of responding librarians.
With access to online databases and journals increasing in popularity as a primary reason for ‘visits’ in 2017, the survey confirms the view that the borrowing of physical items is still the primary reason for visiting libraries today (‘Libraries: A Snapshot of Priorities and Perspectives’ and related surveys are available on the OCLC website at www.oclc.org/reports). So, what will be the librarian’s function, given the challenges to the current modus operandi of researchers and the changing nature of the formats for information dissemination? They will become:
– Stewards of the institution’s information needs. They will no longer be there just to buy or license information products; traditional library funds are being used in other ways.
– Navigators through the information morass.
– Partners with faculty and students, involved with authors and faculty in a much more proactive way.
– Developers and implementers of new services to support the diverse constituency.
Role of Information Intermediaries

The traditional roles of intermediaries as journal subscription agents and booksellers have also come under scrutiny in recent years. The number of subscription agents has been decimated over the past two decades; instead of the flourishing group of international agents of twenty years ago, there are now fewer than can be counted on one hand. In October 2014 it was announced that one of the few remaining global journal subscription agents – Swets – had gone into administration. Traditional academic booksellers are also under pressure as online booksellers such as Amazon take a greater share of the book budget. Aggregators and intermediaries have therefore been faced with disintermediation as publishers have sought to bypass them and gain direct access to users. In the emerging Internet environment, intermediaries are reinventing themselves around tools such as aggregated mashups (mixing the APIs from different services), social bookmarks, signalling gateways and, most important of all, new search and navigation tools. Disintermediation looks the traditional intermediary square in the face. Intermediaries will need flexibility to cope with different functions. Aggregation is no longer the important role it once was (for subscription agents) as interoperability and linking come to the fore. Subscription and licensing consolidation (again performed by subscription agents) will be overtaken by licensing agreements and pay-per-view using micropayments, styled on iTunes or similar models. The role of learned societies as channels for targeted information to a wider professional audience has already been alluded to. So far they have not been that successful in the role of creating scientific product, but with the new tools available on the Internet, and the broader interest of a wider community in new informatics services and social media, there is the opportunity for their day to come.
Role of Funding Agencies

A critical role is performed by funding agencies. They inject the financial resources with which research is performed. Traditionally their concern has been to see effective use of these resources in the promotion of research, but increasingly they are determining in what format research output will be delivered. In the U.K. the main funding agencies, the Research Councils and the Wellcome Trust, have come down heavily in favour of supporting Open Access, more particularly
392 Chapter 27
Summary and Conclusions
the Gold variety. On the other hand, HEFCE, which determines the grants available to universities as part of central funding, has adopted a Greener approach. A challenge facing funding agencies is to have better information about what drives the impact of research, and therefore the impact of their financial investments. At present 'impact factors' of various kinds have become the main tool, but these are heavily tilted towards the formal published article. The key issue is what metrics they will use in the future. For decades the landscape of assessment was reasonably stable, dominated by the ISI Journal Impact Factor (JIF) provided by Thomson Reuters. With the growth of social media, Open Access mandates, and greater emphasis on articles over the journal 'package', new methods of assessing quality and impact have exploded onto the scene. The current metrics also leave out such non-traditional outputs of research as data sets, software, visualisation tools and performance recordings, which are as important as, or even more important than, the journal article in some disciplines. The new 'alternative metrics' are not without their own issues. As with any new measures, different interpretations of the definitions exist, the metrics are inconsistently applied, and data from a comprehensive range of sources are limited. NISO initiated a project in 2013 to identify issues around the new 'altmetrics' that could be solved with standards or best practices. There is a further problem with a top-down approach that dictates how individual researchers conduct and promote their research. As outlined by Michael Polanyi (1962), there is a need to allow a decentralised approach to flourish, and not to constrain science by centralised diktats. The more open and all-embracing the funding structure becomes, the more likely it is that resources will be spread among a wider community, including UKWs.
As such, there is a responsibility on funding agencies and their advisors to ensure that they do not restrict the growth of science in flourishing areas by focusing on the traditional disciplines or maintaining established metric-based agendas. Given the volatility in the 'twigging' of sub-disciplines, being able to 'pick winners' is an unenviable task. Balancing tradition and authority with originality is crucial for the health of scientific research. These are issues which need to be resolved if science communication is to flourish rather than be determined by traditional citation-based practices.
Role of the Disenfranchised
It is surprising how unresponsive the publishing industry – in many respects highly successful – has been in its approach to the market challenges opening up
as print migrates into digital and physical distribution goes online. The paradigm shift has exposed inadequacies, such as unanswered questions on how big the various market sectors are, how many journals exist, and how many authors and research institutions there are at the global level. How viable will the alternative business models be? In particular, the latent or 'long tail' market has not been quantified, segmented or even considered as a commercial opportunity. Publishers have been successful in exploiting the traditional, rich, low-hanging fruit in institutional libraries. Now that these rich pickings are no longer available as library budgets are squeezed, publishers are refocusing their attention on emerging countries. China in particular is being courted as a country whose commitment to research and research outputs has the potential to fill some of the void left by the depressed library market in the West. So even in these difficult times, when the market is changing, publishers and other intermediaries are still ignoring the dynamics at work within the core digital world. A constantly growing cadre of highly educated people creates new centres of demand for scientific literature, albeit not necessarily in the 'research article' form. They come from different sectors with different information needs – whether as professionals, as employees within SMEs, or purely to satisfy a personal need to be kept informed. These needs can no longer be overlooked by the STM publishing sector. Why is this? Perhaps publishers are seeking an easy life. Perhaps they don't want to confront the difficulties of getting to understand a diffuse and personal market sector. Perhaps they are scared of cannibalising their lucrative institutional sales by migrating to personal sales.
Perhaps they don’t have the organisation, and in many cases the resources, to create an infrastructure to support selling information services to a wider community. Perhaps they don’t have the product package which the latent market needs – certainly at the price levels expected by end users and wanted by publishers. Because of their lack of market awareness and understanding, the current players are ill-equipped to create a viable commercial offering for the ‘long tail’ of scholarship. They need to consider what the professionals want, given that scientific information for them is not a top priority. They need to be made aware that SMEs are by their nature largely underfunded, and the price levels set by publishers for their existing products (research articles) are unacceptable. Individual amateur scientists are constrained by their personal budgets from buying into the current packages of scientific information services. A new approach is needed. New business models need to be developed and implemented. New information services have to be designed. Whether the existing publishing system is best suited to do this – or whether a new breed of ‘pub-
394 Chapter 27
Summary and Conclusions
lisher’ emerges from each of the research disciplines to give their members the sort of information service which suits them best, or whether learned societies will emerge with a broader remit and a wider portfolio of services – is something which only time will tell. But, at present, it does seem that publishers need to get their act together. There is a huge latent ‘long tail’ market out there waiting to be served with products and services that may well be different from those which are currently on offer. The digital economy and the Internet have changed the ground rules.
Chapter 28 Research Questions Addressed
In the research aims, objectives and methodology section several specific Research Questions were posed (see chapter 3). The following are the responses which can now be given to these questions. In essence the questions fell into four main areas: the structure of the industry, the extent of the concerns about the industry, the impact which social trends may be having and, finally, what role UKWs may play in the future. These can be condensed into two major themes which pervade the book. The first is that the current STM publishing process is not fit for purpose in a digital world, and what this means for science communication in general. The second is whether the latent market of UKWs can be drawn into the scientific information system in future to make it healthier.
The Research Questions

Structure
1. What are the overall macro-level trends which are impacting on scientific communication?
UK society has moved from an agrarian, through an industrial, to a service-based economy over the past three to four centuries. Within the service economy the importance of 'information' has become apparent: nearly 50% of national product and employment is now information-related, according to Porat (1977) and others. Knowledge workers are a growing constituency within society as a result of the emphasis being put on higher education. Their effectiveness could be enhanced if they were better served by an information service which embraced all knowledge workers, and not just a subset within academia.
2. What is the current structure of the information industry in the UK, specifically the research sector requiring access to scientific information?
Sci/tech/med publishing is a small fish in a large information industry pond. Nevertheless it is unique in its publishing culture, having supported an 'elitist' system which benefits researchers working in large research institutions, whilst the rest of the research community, particularly in the private sector, remains unaffiliated. This has led to a 'them and us' situation, with the beneficiaries of the legacy culture being the core academic market, and the 'long tail' of the numerically large unaffiliated researchers being overlooked.
3. What are the main external drivers for change?
There is a Perfect Storm brewing which could break down those barriers preventing extended and democratic access to research results. The storm includes technological developments, social change and demography, changes in administrative procedures, and new business models. These are external drivers for change, not under the control of the STM publishing industry. As such, publishers and librarians need to adapt to the emerging conditions or risk being side-lined.
Industry Concerns
4. How robust is the current scientific publishing industry in the UK? Will it adapt to address the information needs of a latent knowledge worker sector?
Scientific publishing is accused of being dysfunctional, in that it fails to meet the needs of a new generation of users. Reliance on a print-publishing paradigm, including a tight refereeing system, appears ineffective in serving a communication need. There are accusations that large journal publishers are too commercial and greedy in pursuing their business activities. Finally, the journal publishing system has too many in-built delays, and research results are only formally made available months or years after announcement. In a digital, instantaneous and free Internet environment, such delays are unacceptable. There are few indications that STM publishers are leading the migration towards a more efficient system, one which also embraces the information needs of UKWs. All these concerns raise questions about whether scientific publishing is currently fit for purpose.
5. What are the opinions of leading industry observers concerning the main sci/tech/med publishing stakeholders?
There are prominent advocates for a change in the STM journal publishing system. In terms of tipping points, these include mavens such as Harnad, Gowers, Nielsen, Esposito and many others – experts who are involved in the industry and care about its future effectiveness. The consensus among these vocal individuals is that the existing toll-based system for accessing research journals is too restrictive, access being limited to those who are
part of large research institutions such as universities, at the expense of the rest of society.
6. What are the main information usage patterns found among researchers?
The current millennium sees the STM publishing industry at the cusp of change. Not only is technology bringing new digital services to the fore in society in general, but this is leading to a change in user behaviour. In the pre-Internet world, researchers read books and journal articles; the Net Generation browse, skim and demonstrate more promiscuous search behaviour as they come to terms with information overload. There are suggestions that this behavioural change is rooted in individuals' neurological adaptation to the digital world, with parts of the brain responding more rapidly to the new demands of the information environment, whereas the parts which support in-depth reading of lengthy treatises are in decline (see Carr, 2008).
Social Trends
7. How significant are underlying sociological trends in changing research activity?
Underpinning the changing behaviour of researchers are several sociological concepts. These include 'the wisdom of the mass', which challenges the traditional focus on blind refereeing; the 'tipping point', which suggests change is created through ideas spread virally by individuals who are leaders and trend-setters in the industry; and 'the long tail', which highlights that there is a market beyond the restricted group of researchers within academia, one that through its aggregation of small market niches could exceed the current core institutional market. Other authors highlight trends towards 'collective intelligence' and 'digital natives', which stress that sharing and collaboration – particularly seen in 'Big Science' projects – are changing the way research is undertaken. Teamwork, sharing and collaboration are all supported by the new social networking, whereas traditional scientific communication has been dominated by singletons and legal concerns around plagiarism.
8. How will researchers interact with social media in future in getting access to required scientific research results?
Despite the many advantages which the Internet world offers – speed, openness, zero cost, collaboration – scientific publishing has so far resisted change. Researchers as authors still rely on the established quality control system to achieve recognition for their work, tenure and new funding. Only as researchers
as users do they engage more with social networking and openness. The pressure to change the reward system will come from disillusioned researcher-users and from funding agencies which mandate a more open information system once alternative metrics for measuring performance are in place. These metrics would have to recognise the informality of social media as an important form for communicating research results, and include it in determining funding directions.
9. What other media – other than research journals – are used to keep up to date (such as datasets, crowd sourcing, etc.)?
Disillusioned researchers at the coalface of research activity have often designed and brought into being new information systems which meet their own specific needs. These are generally bespoke to a given discipline, whilst recognising that there can be diversity in information needs even between sub-disciplines. By the same token, there are new social media services developed outside STM publishing which offer better ways to 'communicate' (as distinct from 'publish') quickly. These include LinkedIn, Skype, webinars and moderated bulletin boards. Services such as blogs, wikis, and even Facebook and Twitter, are also media sources which enable rapid communication and may find a place in supporting UKWs in future.
UKWs
10. Who are those not benefiting from the current system of scientific publishing? What are the main sectors within Unaffiliated Knowledge Workers?
There are many unaffiliated sectors of society. Several are peripheral as far as their STM information needs are concerned; others have needs almost as intense as those of researchers in academia. The three categories which have been identified as needing easy access to research output are the professions with a scientific requirement, small and medium enterprises (SMEs), and citizen scientists.
11. What are the main problems each of these knowledge worker sectors has in getting access to formally published research results?
The barriers to access stem from licensing arrangements which restrict the ability to obtain research publications. The business model currently used only permits users within a prescribed organisation to get access easily. New business models, based around open access movements, could feature as key determinants in creating a more equitable scientific communication system in future.
12. What role will learned societies have in supporting access?
Learned societies serve the members of a number of professions, but in recent decades they have often subcontracted publication of their journals to commercial publishers. The economies of scale which commercial publishers offered, in both marketing and technical spheres, were a strong inducement for the smaller, specialised learned societies to buy in. However, economies of scale are no longer as powerful in a digital/Internet world. What becomes powerful is understanding the specialised needs of the community which the learned society serves. By adopting personalised approaches (SDI, alerts, RSS), and developing portals, hubs and communities which are multimedia in their content, a more appropriate publishing strategy could be developed, with or without external partnerships. There is a growing role for learned societies to take over some of the publishing functions currently controlled by commercial publishers. It requires extending beyond their comfort areas into more ICT-supported arenas, but as the costs of participation in the latter tumble, the scale of investment required becomes more acceptable.
13. How will open access facilitate greater democratisation within scientific information?
In tandem with the changing STM publishing profile there is growing support for open access as a replacement for the subscription-based business model. In principle, open access in either its Green or Gold formats, or possibly even a more collaborative Grey network, could enable anyone to access research publications. However, it is a matter of focus and priority. Open Access publishers are currently schooled in the ways of the commercial publishing paradigm; they concentrate on providing easy and free access to the core academic market and are not proactive in stimulating demand from the various UKW sectors. There is a lack of market research and of marketing strategies to support their activities.
So publishers protect their current profitable operations by perpetuating licensing models. As a tool for easy access and democratisation of the scientific information process, open access is essential. It has raised its banner at a time when the 'long tail' has highlighted new market opportunities – opportunities which require the openness, interactivity and free availability that come with Open Access.
14. What is the impact on stakeholders of providing for UKW researchers' information needs?
All stakeholders will be affected by the 'perfect storm' of external changes, particularly the larger commercial journal publishers. Librarians may see a greater role for themselves in running institutional repositories, at the expense of physical collection development. Funders may change the way
STM is disseminated, but only after suitable metrics are in place to ensure that funded projects deliver quality research.
15. What needs to be done to enfranchise UKWs in the UK in future?
A different business model is required. It would incorporate Green/Gold open access, and it would include learned societies as key levers into the professions. It would embrace a series of different media types, not just journals. Portals and hubs which bring together formal and informal information types, and target them, using SDI, at specific audiences, would offer a counterbalance to the 'eat all you can' approach of services such as Google. The need is for agencies to work proactively with representatives from subject areas to ensure that the full range of information services is available to meet developing market needs. Moderators could replace editors as leaders in the information chain.
Moving Forward
There are a number of strategic issues which emerge from the above analysis.

Industry Dysfunctionality
– Support national policies in the adoption of Open Access, both Green and Gold. Also investigate Grey Open Access options, which include informative abstracts (Allington, 2013) being attached to the item.
– Take action to prevent Hybrid Open Access opening the door to 'double dipping': standards and procedures should be put in place by information providers to prevent dual payment for the same item. Also, put a stop to abuse by so-called Gold publishers who renege on publishing articles even after receiving article processing charge (APC) payments (see Beall's list of predatory publishers).
– Pursue investigations into regional and national licensing of STM material to include both the core STM market and UKWs.
– Assess alternative quality control (refereeing) systems in addition to double-blind refereeing, capitalising on social networking, transparency, speed and the wisdom of crowds.
– Support copyright protection services, such as Creative Commons, which are less restrictive on the dissemination of research output.
– Support innovative services in specific research areas developed by subject experts, including projects such as Mendeley, Frontiers and ResearchGate.
– Support experiments in alternative, multimedia STM information products and services, including 'personalisation and customisation' of information delivery through SDI-equivalent processes.
– Support the development of subject-based hubs and portals incorporating different information/media sources and links to third-party resources.
– Support initiatives such as DeepDyve, and payment mechanisms such as iTunes and PayPal, which give users different options for paying for individual items.
– Encourage greater stakeholder cooperation on strategies for a fully digital STM information system.
UKW Focused
– Projects in support of STM-focused learned societies. Capitalising on the latter's networks, membership and understanding of their members' needs, the aim would be to explore innovative approaches to meeting UKW needs.
– Explore learned society members' current practices in keeping up with new developments (sharing investment in market research projects).
– Undertake more demographic research studies to quantify those sectors where regular scientific information input would prove useful.
– Investigate the business potential for more tertiary publications which give a layman's description of the latest pertinent developments across a number of key subject areas (building on the Nautilus concept and the successes of Nature and Science magazines).
– Assess new business models which allow end users (UKWs) to access wanted information at minimal unit cost, relying on the 'long tail' to create a viable commercial operation.
– Extend the template for UKW research in the U.K. to other countries and continents.
Moving forward on several of the above could hasten the day when the elitism and dysfunctionality within STM publishing are eradicated, and UKWs are given equal access to the information resources currently hidden behind academic garden walls.
Enfranchising UKWs in the U.K.
There has been a switch in the business paradigm which affects scientific publishing, and the expectation is that this will become more dramatic in years to
come. The Internet and Web, supported by other changes in technology, social interaction and administrative arrangements, have created a watershed between the old, traditional, subscription-based way of doing business and the open, interactive and collaborative approach being adopted by the Net generation. As Clay Shirky (2008) has pointed out, two decades ago the supply of published information created its own demand, resulting in a take-it-or-leave-it approach. Now demand is creating its own supply, which means user needs are driving the creation of product. Scarcity is no longer an issue in an era of massive digitisation, data compilation and filtering. This heralds a new approach to the publication of research output. Whilst the main stakeholders argue over the merits or otherwise of promoting 'free access' to research output, an equally challenging need has become paramount: to provide end users with what they need, in a format which is wanted, at a price which is acceptable, within an overall context that enables all participating stakeholders to achieve a reasonable and sustainable financial return. The various business models which exist have been described in chapter 24. One which stands out as appropriate comes from a different industry sector. It is the use of the trail which end users leave in their search and discovery process as an indicator of what they might want or need from research results just being, or about to be, completed. It is the approach Amazon has taken in alerting book readers to what else they may find interesting based on their purchasing history. The profile of interest is the key aspect: building up a picture of what a researcher may find indispensable from the output of colleagues and peers, and then matching this profile against all incoming articles, commentaries, reviews, data streams, videos, software, etc., based on quality metadata which describes such content.
All this would operate from a hub or portal administered by agents close to the scientific discipline – mavens aware of digital end users' needs. Once such a personalised system is introduced, it would then be necessary to establish a pricing formula that allows end users to buy research output in an acceptable way. We can look at experience in other information industries to see how this could be implemented. Apple is an organisation which has pioneered the use of micropayments as a way of linking demand and supply. Micropayments, if set at an 'acceptable' level, would not only attract the hard-core researchers in academia and the corporate R&D world to buy into the system but, more importantly, enable UKWs to become purchasers of granulated research output. The 'long tail' of scientific research would be captured, enabling a greater cake to be made available for information providers to compete over. Personalisation and customisation of information, together with micropayments, would be one way to meet the objectives of an information society where
abundance takes over from scarcity, and the filtering of information in anticipation of demand becomes a goal. Using market segmentation to ensure that relevant items reach only those who want access to them leads to efficiency. What stands in the way is the current practice of publishers locking information behind closed doors and demanding high prices for access. There is a bigger market out there which can only be reached through a more open and less expensive set of publishing strategies. Anderson (2009b) has made a strong case for basic information (the research article) to be made available for free, with commercial returns coming from premium services implemented on top of the article resources. If such radical paradigms were introduced, the institutional structures of publishers, libraries and intermediaries would all give way to a service-focused set of organisations, some more 'virtual' than others, but few requiring the huge commitments to buildings or staffing levels that existed in the past. As information technology becomes more intricately interwoven into the information-gathering habits of knowledge workers, so the challenge arises for the traditional stakeholders to adapt their operations, delivery mechanisms and business models. Publishers will no longer have to employ armies of desk editors; library staff could be reduced in line with a reduction in some of the services they support; and intermediaries would change their role and become more digital in their support activities. Traditional players will need to take on new business roles or they will disappear under the flood of new, innovative operations. Future publications will also change. Even now, the digital potential of data-based media publications has not been exploited. Currently, digital publications copy features of printed publications.
However, new business models that exploit the potential of digital media, and that also enable a wider audience of knowledge workers to be drawn in, are urgently required. The new products and services could exploit their potential to be part of a network rather than copy the traditional practice of being part of a chain. The future is challenging. It is also exciting, particularly for those with the imagination and energy to run with the new potential which is opening up. Electronic publishing for STM is finding a new role in the new millennium.
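The profile-matching approach described above – building a picture of a user's interests from the trail they leave, then scoring incoming items against that profile using descriptive metadata – can be sketched in a few lines. This is purely an illustration: the keyword-overlap scoring, the alert threshold, and all names and data below are invented assumptions, not a system described in this book.

```python
# Minimal sketch of profile-based alerting: a user's reading history
# yields a keyword profile; incoming items are scored against it, and
# only sufficiently similar items trigger an alert.

def build_profile(read_items):
    """Collect the metadata keywords of everything the user has read."""
    profile = set()
    for item in read_items:
        profile.update(item["keywords"])
    return profile

def score(profile, item):
    """Jaccard overlap between the profile and an item's keywords."""
    keywords = set(item["keywords"])
    if not profile or not keywords:
        return 0.0
    return len(profile & keywords) / len(profile | keywords)

def alerts(profile, incoming, threshold=0.2):
    """Titles of incoming items whose metadata matches the profile."""
    return [item["title"] for item in incoming
            if score(profile, item) >= threshold]

# Hypothetical reading history and incoming content stream.
history = [
    {"title": "Gene expression atlas", "keywords": ["genomics", "rna", "atlas"]},
    {"title": "CRISPR screening",      "keywords": ["genomics", "crispr"]},
]
incoming = [
    {"title": "New RNA-seq pipeline",  "keywords": ["rna", "genomics", "software"]},
    {"title": "Medieval manuscripts",  "keywords": ["history", "palaeography"]},
]

profile = build_profile(history)
print(alerts(profile, incoming))  # -> ['New RNA-seq pipeline']
```

A real service would replace the bare keyword sets with richer metadata (subject codes, citations, co-authorship) and tune the threshold per discipline, but the shape – profile, match, filter – is the same.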
Bibliography

Ackoff, R. 1989. “From Data to Wisdom.” Journal of Applied Systems Analysis, 16: 3–9.
Adams, J., C. King, and N. Ma. 2009. Global Research Report – China. Research and Collaboration in the New Geography of Science. Thomson Reuters.
Allington, D. 2013. “On Open Access, and Why It’s Not the Answer.” Personal blog dated October 15, 2013. Accessed May 25, 2015. http://www.danielallington.net/2013/10/openaccess-why-not-answer/#sthash.aQfG6d1A.dpbs.
Anderson, C. 2004. “The Long Tail.” Wired, 12 (10). Accessed May 25, 2015. http://archive.wired.com/wired/archive/12.10/tail.html.
Anderson, C. 2008a. “Free! Why $0.00 Is the Future of Business.” Wired, 16 (3). Accessed May 25, 2015. http://archive.wired.com/techbiz/it/magazine/16-03/ff_free?currentPage=all.
Anderson, C. 2009a. The Longer Long Tail – How Endless Choice Is Creating Unlimited Demand. London, UK: Random House Business Books.
Anderson, C. 2009b. Free – The Future of a Radical Price. London, UK: Random House Business Books.
Anderson, I. 2008. “The Audacity of SCOAP3.” ARL: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, 257, April 12–13, 2008. Accessed May 26, 2015. http://www.arl.org/resources/pubs/br/br257.shtml.
Anderson, K. 2011. “Uninformed, Unhinged, and Unfair – The Monbiot Rant.” Scholarly Kitchen, September 1, 2011. Accessed April 2014. http://scholarlykitchen.sspnet.org/2011/09/01/uninformed-unhinged-and-unfair-the-monbiot-rant/.
Anderson, P.W. 1972. “More is Different.” Science, New Series, 177 (4047), August 1972: 393–396.
Anderson, R. 2011. “OA Rhetoric, Economics, and the Definition of ‘Research.’” Scholarly Kitchen, September 7, 2011. Accessed October 16, 2013. http://scholarlykitchen.sspnet.org/2011/09/07/oa-rhetoric-economics-and-the-definition-of-research/.
Anderson, R. 2013a. “Signal Distortion – Why the Scholarly Communication Economy Is So Weird.” Scholarly Kitchen, May 14, 2013. Accessed October 16, 2013. http://scholarlykitchen.sspnet.org/2013/05/14/signal-distortion-why-the-scholarly-communication-economy-is-so-weird/.
Anderson, R. 2013b. “On the Likelihood of Academia ‘Taking Back’ Scholarly Publishing.” Scholarly Kitchen, June 27, 2013. Accessed October 16, 2013. http://scholarlykitchen.sspnet.org/2013/06/27/on-the-likelihood-of-academia-taking-back-scholarly-publishing/.
Appleyard, A. 2010. “British Library Document Supply – a Fork in the Road.” Interlending and Document Supply, 38 (1): 12–16.
Aspesi, C. 2012. “Reed Elsevier: A Short History of Two Days in July (and Why Investors Should Care).” (see Poynder, 2012).
Association of Research Libraries (ARL). 2008. Current Models of Digital Scientific Communication – Results of an Investigation Conducted by Ithaka for the Association of Research Libraries. Washington, DC: Association of Research Libraries.
Association of Research Libraries (ARL). 2009. Statistics 2008–09. Washington, DC: Association of Research Libraries.
Baldwin, C. 2004. What Do Societies Do with Their Publishing Surpluses? Oxford: Blackwell. Accessed October 16, 2013. http://www.alpsp.org/Ebusiness/ProductCatalog/nfpSurpluses.aspx?ID=48.
Bankier, J-G. See Poynder 2014.
Bartling, S., and S. Friesike, eds. 2014. Opening Science – The Evolving Guide on How the Web is Changing Research, Collaboration and Scholarly Publishing. Accessed October 16, 2014. http://book.openingscience.org/
Battelle, J.L. 2005. The Search: How Google and Its Rivals Rewrote the Rules of Business and Transformed Our Culture. New York: Portfolio.
Beckett, C., and S. Inger. 2007. Self Archiving and Journal Subscriptions: Coexistence or Competition? An International Survey of Librarians' Preferences. London: Scientific Information Strategies Ltd for Publishing Research Consortium.
Beise, M., and H. Stahl. 1999. "Public Research and Industrial Innovations in Germany." Research Policy, 28 (4): 397–422.
Bell, A. 2012. "Wider Open Spaces: Freely Accessed Papers Are Simply Points in a Constellation of Scientific Communication with the Public, says Alice Bell." Times Higher Education, April 19, 2012.
Bell, D. 1973. The Coming of Post-Industrial Society: A Venture in Social Forecasting. New York, USA: Basic Books.
Bergstrom, C.T., and T.C. Bergstrom. 2004. "The Costs and Benefits of Library Site Licenses to Academic Journals." PNAS, 101 (3), January 20, 2004. University of Washington, Seattle. http://www.pnas.org/content/101/3/897.full.pdf+html
Björk, B-C. 2012a. "Open Access Versus Subscription Journals – A Comparison of Scientific Impact." BMC Medicine, 10: 73. Accessed August 2012. http://www.biomedcentral.com/1741-7015/10/73
Björk, B-C. 2012b. "The Hybrid Model for Open Access Publication of Scientific Articles – A Failed Experiment?" Journal of the American Society for Information Science and Technology, 63 (8): 1496–1504.
Björk, B-C., A. Roos, and M. Lauri. 2009. "Scientific Journal Publishing: Yearly Volume and Open Access Availability." Information Research, 14 (1), paper 391. Accessed October 16, 2013. http://InformationR.net/ir/14-1/paper391.html
Bohannon, J. 2014. "Secret Bundles of Profit." Science, 344 (6190): 1332–1333. Accessed July 1, 2014. DOI:10.1126/science.344.6190.1332
Brand, S. 1984. "Information Wants to Be Free. Information Wants to Be Expensive." Whole Earth Review, May 1985: 49.
Brienza, C. 2011. "Communication or Credentialing? On the Value of Academic Publishing." Impact of Social Sciences, May 5, 2011. Accessed October 16, 2013. http://blogs.lse.ac.uk/impactofsocialsciences/2011/05/05/communication-or-credentialing/
Brighton, R. 2008. Open Access to Research Outputs, Final Report to RCUK (Research Councils UK). LISU (Loughborough University) and SQW Consulting (Cambridge).
Brooks, D. 2012. The Social Animal: A Story of How Success Happens. 2nd edition. London, UK: Short Books.
Brown, A. 2009. "Digital Britain Needs Access to Science Journals, not YouTube." The Guardian, February 5, 2009.
Brown, D. 2003. "Is This the End of the Article Economy? A Strategic Review of Document Delivery." Interlending and Document Supply, 31 (4): 253–263.
Brynjolfsson, E., Y.J. Hu, and D. Simester. 2007. Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales. Social Science Research Network. Accessed November 2012. http://ssrn.com/abstract=953587
Cambridge Economic Policy Associates. 2008. Activities, Costs and Funding Flows in the Scholarly Communications System in the U.K. London, UK: Research Information Network. Accessed May 26, 2015. http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/activities-costs-and-funding-flows-scholarly-commu
Capgemini UK. 2008. The Information Opportunity. London: Capgemini.
Carpenter, J., L. Wetheridge, and N. Smith. 2010. Researchers of Tomorrow: Annual Report 2009–10. British Library/JISC. Accessed April 2014. http://explorationforchange.net/attachments/056_RoT%20Year%201%20report%20final%20100622.pdf
Carr, N. 2008. "Is Google Making Us Stupid? What the Internet Is Doing to Our Brains." Atlantic Magazine, July/August 2008. Accessed April 2014. http://www.theatlantic.com/doc/200807/google
Carr, N. 2010. The Shallows – How the Internet Is Changing the Way We Think, Read and Remember. London and New York: W.W. Norton and Co.
Castells, M. 1996. The Information Age: Economy, Society and Culture. Volume 1, The Rise of the Network Society. Oxford: Blackwell Publishers.
CBD. 2009. CBD Directory of British Associations and Associations in Ireland. 19th edition. Beckenham, Kent: CBD Research Ltd.
Center for Studies in Higher Education (CSHE). 2011. Final Report – Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines. University of California, Berkeley. Accessed May 25, 2015. http://www.cshe.berkeley.edu/publications/final-report-assessing-future-landscape-scholarly-communication-exploration-faculty
CEPA LLP and M. Ware Consulting. 2011. Heading for the Open Road: Costs and Benefits of Transitions in Scholarly Communication. Accessed May 26, 2015. http://www.rin.ac.uk/our-work/communicating-and-disseminating-research/heading-open-road-costs-and-benefits-transitions-s
Chen, C. 2010. "Information Visualization." WIREs Comp Stat, 2: 387–403. doi:10.1002/wics.89
CIBER. 2005. "New Journal Publishing Models – An International Survey of Senior Researchers." London: CIBER, City University.
CIBER. 2009. The Economic Downturn and Libraries – A Global Survey of the World's Libraries in Challenging Times. Charleston Observatory, 2009. Accessed May 26, 2015. http://ciber-research.eu/download/20091103-charleston-survey.pdf
CIBER. 2010. "Research Support Services: What Services Do Researchers Need and Use?" CIBER, November 2010. Commissioned by RIN and OCLC.
CIBER. 2011. Are Social Media Impacting on Research? Charleston, SC: Charleston Observatory. Accessed April 2014. http://www.ucl.ac.uk/infostudies/research/ciber/
Coase, R. 1937. "The Nature of the Firm." Economica, New Series, 4 (16): 386–405.
Corbyn, Z. 2009. "A Threat to Scientific Communication." Times Higher Education Supplement, August 13, 2009.
Cox, J., and L. Cox. 2008. Scientific Publishing Practice – Academic Journal Publishers' Policies and Practices in Online Scientific Publishing: Third Survey. For Association of Learned and Professional Society Publishers (ALPSP), Shoreham-by-Sea, West Sussex, U.K.
Cox, J., and L. Cox. 2010. "One Year On: Evaluating the Initial Impact of the Scottish Higher Education Digital Library (SHEDL)." London: RIN.
Davenport, T.H. 2011. "Rethinking Knowledge Work: A Strategic Approach." McKinsey Quarterly. Accessed May 25, 2015. http://www.mckinsey.com/insights/organization/rethinking_knowledge_work_a_strategic_approach
Davis, P.M. 2014. "Are Scientists Reading Less? Apparently Scientists Didn't Read This Paper." The Scholarly Kitchen, February 7. Accessed May 25, 2015. http://scholarlykitchen.sspnet.org/2014/02/07/are-scientists-reading-less-apparently-scientists-didnt-read-this-paper/
Dean, D., and C. Webb. 2011. "Recovering from Information Overload." McKinsey Quarterly, January 2011. Accessed May 25, 2015. http://www.mckinsey.com/insights/organization/recovering_from_information_overload
Denmark's Electronic Research Library (DEFF). 2011. Access to Research and Technical Information in Denmark. Denmark: Forsknings- og Innovationsstyrelsen.
Dewatripont, M., V. Ginsburgh, P. Legros, and A. Walckiers. 2006. "Study on the Economic and Technical Evolution of the Scientific Publication Markets in Europe." European Commission (Research Directorate-General). Accessed May 25, 2015. http://ec.europa.eu/research/science-society/pdf/scientific-publication-study_en.pdf
Drucker, P.F. 1959. Landmarks of Tomorrow. New York: Harper and Row.
Drucker, P.F. 1973. Management: Tasks, Responsibilities, Practices. New York: Harper and Row.
Duggan, M., and J. Brenner. 2013. The Demographics of Social Media Users. Pew Research Center's Internet & American Life Project. Accessed February 14, 2013. http://www.pewresearch.org
Duggan, M., and A. Smith. 2013. Social Media Update. Pew Research Center's Internet & American Life Project. Accessed December 2013. http://www.pewresearch.org
Dunbar, R.I.M. 1992. "Neocortex Size as a Constraint on Group Size in Primates." Journal of Human Evolution, 22 (6): 469–493.
Dutton, W., and E.J. Helsper. 2007. The Internet in Britain: 2007. Oxford, UK: University of Oxford, Oxford Internet Institute.
Eagleman, D. 2011. Incognito: The Secret Lives of the Brain. Edinburgh: Canongate Books.
Economist. 2013. "Scientific Publishing Has Changed the World, Now It Needs to Change Itself." Economist, October 19, 2013.
Ernst and Young. 2012. University of the Future – A Thousand Year Old Industry on the Cusp of Profound Change. Australia: Ernst and Young.
Esposito, J. 2007. "Open Access 2.0: The Nautilus: Where – and How – OA Will Actually Work." The Scientist, 21 (11): 52.
Esposito, J. 2012a. "Putting Society Publishing in Context." The Scholarly Kitchen, July 16, 2012.
Esposito, J. 2013. "Open Access 2.0: Access to Scholarly Publications Moves to a New Phase." The Scholarly Kitchen, February 2013.
European Commission. 2007a. Green Paper – The European Research Area: New Perspectives. Luxembourg: EC, COM (2007): 161.
European Commission. 2007b. "Communication on Scientific Information in the Digital Age: Access, Dissemination and Preservation." COM (2007): 56. Accessed December 2009. http://ec.europa.eu/research/conferences/2009/era2009/speakers/papers/paper_alma_swan.pdf
Eurostat. 2007. "Highly Qualified Workers in Science and Technology." Statistics in Focus, Eurostat, 103/2007.
Evans, P., and T. Wurster. 2000. Blown to Bits: How the New Economics of Information Transforms Strategy. Cambridge, MA, USA: Harvard Business School Press.
Faxon Institute. 1991/2. Examination of Work-related Information Acquisition and Usage among Scientific, Technical and Medical Fields. Boston, USA: The Faxon Institute. Grey literature.
Finch Report. 2012. See RIN 2012.
Fox, S. 2008. "Degrees of Access." Pew Research Center, Internet, Science and Tech, February 21, 2008. Washington, DC, USA.
Frantsvåg, J.E. 2010. "The Size Distribution of Open Access Publishers: A Problem for Open Access." First Monday, 15 (12). Accessed December 2011. http://firstmonday.org/ojs/index.php/fm/article/view/3208/2726
Frass, W., J. Cross, and V. Gardner. 2013. "Open Access Survey: Exploring the Views of Taylor & Francis and Routledge Authors." Taylor & Francis/Routledge, March 2013. Accessed May 26, 2015. http://www.tandf.co.uk/journals/pdf/open-access-survey-march2013.pdf
Friend, F.J. 2007. "UK Access to UK Research." Serials, 20 (3): 231–234.
Friend, F.J. 2013. Liblicense-l listserv, November 7, 2013.
Gardner, T., and S. Inger. 2012. "How Readers Navigate to Scholarly Content: Comparing the Changing User Behaviour between 2005 and 2012 and Its Impact on Publisher Web Site Design and Function." Accessed May 25, 2015. http://www.renewtraining.com/How-Readers-Discover-Content-in-Scholarly-Journals-summary-edition.pdf
Gartner. 2012. April Forecast: Media Tablets by Operating System Worldwide 2010–2016: 2012 Update. Gartner. Accessed April 2014. http://my.gartner.com/portal/server.pt?open=512&objID=260&mode=2&PageID=3460702&resId=1952715&ref=QuickSearch&sthkw=milanesi
Gartner. 2014. Hype Cycle Methodologies. Gartner. http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp
Gilder, G. 1981. Wealth and Poverty. ICS Press.
Gladwell, M. 2000. The Tipping Point – How Little Things Can Make a Big Difference. Boston, MA: Little Brown and Co.
Gladwell, M. 2008. Outliers: The Story of Success. Boston, MA, USA: Little Brown & Co.
Gowers, T.W. 2009. "Is Massively Collaborative Mathematics Possible?" Gowers's Weblog, January 27, 2009. Accessed April 2014. http://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/
Gowers, T.W. 2012. The Sunday Times, February 19, 2012.
Gowers, T.W. 2014. "Elsevier Journals – Some Facts." Gowers's Weblog. https://gowers.wordpress.com/2014/04/24/elsevier-journals-some-facts/
Greenfield, S.A. 2005. "Biotechnology, the Brain, and the Future." Trends in Biotechnology, 23 (1): 34–41.
Grover, D. 2011. Posted on [email protected]. Electronic Resources Coordinator, University of Washington Libraries.
Guedon, J-C. 2004. "The 'Green' and 'Gold' Roads to Open Access: The Case for Mixing and Matching." Serials Review, 30 (4): 315.
Haank, D. 2011. See Poynder 2011a.
Hall, M.B. 2002. Henry Oldenburg: Shaping the Royal Society. Oxford, UK: Oxford University Press.
Hank, C. 2012. "The Scholar Blogs of Today, Tomorrow: Practices and Perceptions of Value, Impact and Stewardship." Presentation slides from a talk given at ICPSR (University of Michigan), August 14, 2012.
Hardin, G. 1968. "The Tragedy of the Commons." Science, 162 (3859), December 13, 1968: 1243–1249.
Hargreaves, I. 2011. Digital Opportunity – A Review of Intellectual Property and Growth. May 2011. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/32563/ipreview-finalreport.pdf
Harnad, S. 1994. "Scientific Journals at the Crossroads: A Subversive Proposal for Electronic Publishing." In Publicly Retrievable FTP Archives for Esoteric Science and Scholarship: A Subversive Proposal, edited by Ann Okerson and James O'Donnell. Washington, DC: Association of Research Libraries, 11–13.
Harris, S. 2012. "Moving Towards an Open Access Future." SAGE Publications. Accessed August 2012. http://www.sagepublications.com
Harvard University. 2012. "Major Periodical Subscriptions Cannot Be Sustained." Faculty Advisory Council Memorandum on Journal Pricing. http://isites.harvard.edu/icb/icb.do?keyword=k77982&tabgroupid=icb.tabgroup143448
Henry, S., A. Gilmore, and D. Gallagher. 2007. "E-marketing and SMEs: Operational Lessons for the Future." European Business Review, 19 (3): 234–247.
Higher Education Funding Council for England (HEFCE). 2014. "Policy for Open Access in the Post-2014 Research Excellence Framework." March 2014. http://www.hefce.ac.uk/pubs/year/2014/201407/
Higher Education Policy Institute (HEPI). 2013. Autumn Conference on the Future for UK Research Policy & Funding. Higher Education Policy Institute at the Royal Society, November 14, 2013.
Higher Education Statistics Agency (HESA). 2010. Destinations of Leavers from Higher Education Institutions 2008/09 and 2009/10. Higher Education Statistics Agency, June 2010.
Higher Education Statistics Agency (HESA). 2013. Higher Education – Statistics for the United Kingdom 2011/12. HESA, June 2013.
Higher Education Statistics Agency (HESA). 2014. Higher Education – Statistics for the United Kingdom 2011/12. HESA, June 2014.
Horrigan, J.B. 2006. "The Internet As a Resource for News and Information about Science." Pew Internet and American Life Project, Washington, DC. Accessed August 2012. http://www.pewinternet.org
Houghton, J. 2009. Benefits of Open Access Clearly Outweigh Costs in Three European Countries. Knowledge Exchange. Accessed December 2009. http://www.knowledge-exchange.info/Default.aspx?ID=316
Houghton, J., B. Rasmussen, S. Sheehan, C. Oppenheim, A. Morris, C. Creaser, H. Greenwood, M. Summers, and A. Gourlay. 2009b. Economic Implications of Alternative Scientific Publishing Models: Exploring the Costs and Benefits. CSES and Loughborough University, January 2009. U.K.: Jisc. Accessed December 2009. http://www.jisc.ac.uk/publications/documents/economicpublishingmodelsfinalreport.aspx
Houghton, J., and P. Sheehan. 2006. The Economic Impact of Enhanced Access to Research Findings. Melbourne: Centre for Strategic Economic Studies, Victoria University.
Houghton, J., A. Swan, and S. Brown. 2011. Access to Research and Technical Information in Denmark. Forsknings- og Innovationsstyrelsen.
House of Commons Science and Technology Committee. 2004. Scientific Publications: Free for All? Tenth Report of Session 2003–04.
Howe, J. 2006. "The Rise of Crowdsourcing." Wired, 14 (6): 1–4.
International Data Corporation (IDC). 2009. Hidden Costs of Information Work: A Progress Report. (IDC #217936).
International Council of Scientific and Technical Information (ICSTI). 2011. Annual Conference, Beijing, China, June 2011.
Internet Statistics. 2014. Internet World Stats. http://www.internetworldstats.com
Jeffrey, K.G. 2012. GOAL listserv, June 21, 2012.
Joint Information Services Committee (Jisc). 2009a. Jisc 2009 Roadmap. Accessed December 2009. http://www.jisc.ac.uk/fundingopportunities/futurecalls.aspx
Joint Information Services Committee (Jisc). 2009b. Repositories Support Project Briefing Papers. Centre for Research Communications, University of Nottingham. Accessed December 2009. http://www.jisc.ac.uk/whatwedo/programmes/reppres/repsupport.aspx
Joint Information Services Committee (Jisc). 2012. "Value and Benefits of Text Mining." Diane McDonald and Ursula Kelly, Viewforth Consulting, 2012.
Kahneman, D., and A. Tversky. 1979. "Prospect Theory: An Analysis of Decision under Risk." Econometrica, 47 (2): 263.
Kanter, R.M. 2003. "Leadership and the Psychology of Turnarounds." Harvard Business Review, June 2003.
Keen, A. 2007. The Cult of the Amateur: How Today's Internet is Killing Our Culture. New York, USA: Doubleday Publishing/Random House.
King, C. 2012. "Multiauthor Papers: Onwards and Upwards." Science Watch, Thomson Reuters, July 2012.
Knovel. 2013. "Engineers Searched Online for Equations More Frequently in 2013, But Were Dissatisfied with Results." Knovel, NY. Accessed December 2013. http://why.knovel.com/news-a-events/2979-engineers-searched-online-for-equations-more-frequently-in-2013-but-were-dissatisfied-with-results-.html
Krotoski, A. 2010. "Democratic, but Dangerous Too: How the Web Changed Our World." The Observer, January 24, 2010.
Krugman, P. 2013. "The Facebooking of Economics." New York Times, December 17, 2013.
Lanier, J. 2010. You Are Not a Gadget – A Manifesto. Toronto: Random House.
League of European Research Universities (LERU). 2011. "The LERU Roadmap Towards Open Access." LERU Open Access Working Group.
Lenhart, A., M. Madden, A. Smith, and A. Macgill. 2007. "Teens and Social Media: The Use of Social Media Gains a Greater Foothold in Teen Life as They Embrace the Conversational Nature of Interactive Online Media." Pew Internet and American Life Project, Washington, DC, USA. December 19, 2007.
Lieberman, M.D. 2013. Social: Why Our Brains Are Wired to Connect. New York, NY: Crown.
Linacre, C. 2009. "Survey of Library Services in UK Professional Bodies." Grey literature, 2013.
Lloyd, W.F. 1833. Two Lectures on the Checks to Population. Oxford: The author. Accessed May 25, 2015. https://ia801408.us.archive.org/11/items/twolecturesonch00lloygoog/twolecturesonch00lloygoog.pdf
Mabe, M.A. 2003. "The Growth and Number of Journals." Serials, 16 (2): 191–197.
Mabe, M.A. 2009. "Scientific Publishing." European Review, 17 (1): 3–22.
Mabe, M.A., and M. Amin. 2001. "Growth Dynamics of Scholarly Journals." Scientometrics, 51 (1): 147–162.
Mabe, M.A., and M. Amin. 2002. "Dr Jekyll and Dr Hyde: Author-Reader Asymmetries in Scholarly Publishing." Aslib Proceedings, 54: 149–175.
Machlup, F. 1962. The Production and Distribution of Knowledge in the United States. Princeton, NJ: Princeton University Press.
Madden, M. 2005. Internet Impact 101. Pew Research. Accessed May 26, 2015. http://www.pewinternet.org/2005/06/08/internet-impact-101/
Maguire, E.A., D.G. Gadian, I.S. Johnsrude, C.D. Good, J. Ashburner, R.S.J. Frackowiak, and C.D. Frith. 2000. "Navigation-Related Structural Change in the Hippocampi of Taxi Drivers." PNAS, 97 (8): 4398–4403.
Mandavilli, A. 2011. "Trial by Twitter." Nature, 469 (January 20): 286–287.
Mansfield, E. 1991. "Academic Research and Industrial Innovation." University of Illinois at Urbana-Champaign.
McCabe, M.J. 2004. "Information Goods and Endogenous Pricing Strategies: The Case of Academic Journals." Economics Bulletin, 12 (10): 1–11.
McDonald, D. 2012. Report into the Value and Benefits of Text Mining to UK Further and Higher Education. Jisc Report, Doc# 811, Version 1.1.
McGuigan, G., and R. Russell. 2008. "The Business of Academic Publishing: A Strategic Analysis of the Academic Journal Publishing Industry and Its Impact on the Future of Scientific Publishing." E-JASL: The Electronic Journal of the Academic and Special Librarianship, 9 (3). Accessed April 2014. http://southernlibrarianship.icaap.org/content/v09n03/mcguigan_g01.html
McLuhan, M. 1967. The Medium is the Massage: An Inventory of Effects. Random House/Gingko Press.
Mendeley. 2012. Global Research Report. Mendeley. Accessed November 2012. http://mnd.ly/global-research-report
Merton, R.K. 1968. "The Matthew Effect in Science." Science, 159 (3810): 56–63.
Monbiot, G. 2011. "Academic Publishers Make Murdoch Look Like a Socialist: Academic Publishers Charge Vast Fees to Access Research Paid for by Us. Down with the Knowledge Monopoly Racketeers." And "The Lairds of Learning – How Did Publishers Acquire These Feudal Powers?" Guardian, August 29, 2011. Accessed October 16, 2013. http://www.theguardian.com/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist
Moore, G.E. 1965. "Cramming More Components onto Integrated Circuits." Electronics Magazine, 38 (8): 114–117.
Morris, S. 2009. "Journal Authors' Rights: Perception and Reality. Summary Paper 5." Publishing Research Consortium. Accessed December 2009. http://www.publishingresearch.net/documents/JournalAuthorsRights.pdf
Morville, P. 2011. Ambient Findability – What We Find Changes Who We Become. Sebastopol, CA: O'Reilly Media.
Mumford, L. 1974. "Enough Energy for Life and the Next Transformation of Man." MIT lecture transcript. CoEvolution Quarterly (Sausalito, CA: POINT Foundation), 1 (4): 19–23.
Murray-Rust, P. 2012. GOAL listserv, August 10, 2012.
Murray-Rust, P. 2014. Blog post, February 2014.
National Science Foundation (NSF). 2008. Science and Engineering Indicators 2008. Accessed October 16, 2013. http://www.nsf.gov/statistics/seind08
National Science Foundation (NSF). 2010a. "Key Science and Engineering Indicators." National Science Board, 2010.
National Science Foundation (NSF). 2014. "Key Science and Engineering Indicators." National Science Board, 2014.
Nielsen, M. 2009. Is Scientific Publishing About to Be Disrupted? Accessed May 26, 2015. http://michaelnielsen.org/blog/is-scientific-publishing-about-to-be-disrupted/
Nielsen, M. 2011. Reinventing Discovery: The New Era of Networked Science. Princeton, NJ: Princeton University Press.
Newton, I. 1668. The Mathematical Papers of Isaac Newton, 1667–1670. Cambridge: Cambridge University Press.
Nicholas, D. 2010a. "The Virtual Scholar: The Hard and Evidential Truth." In Digital Library Futures. Munich, Germany: K.G. Saur Verlag, 23–32.
Nicholas, D. 2010b. "The Behaviour of the Researcher of the Future (the 'Google Generation')." Art Libraries Journal, 35 (1): 18–21.
Nicholas, D. 2014. "The Google Generation, the Mobile Phone and the 'Library' of the Future: Implications for Society, Governments and Libraries." In ICOLIS 2014, edited by A. Noorhidawati. Kuala Lumpur: DLIS, FCSIT, 1–8.
Nicholas, D., and D. Clark. 2013a. "Information Seeking Behaviour and Usage on a Multi-media Platform: Case Study Europeana." In Progress and Research in Library and Information Study. Springer, Winter 2013.
Nicholas, D., and D. Clark. 2013b. "Social Media Referrals on a Multi-media Platform: A Case Study of Europeana." Journal of Documentation, Information & Knowledge, 0 (6): 9–22.
Nicholas, D., D. Clark, H.R. Jamali, and A. Watkinson. 2014. "Log Usage Analysis: What It Discloses about Use, Information Seeking and Trustworthiness." International Journal of Knowledge Content Development & Technology, 4 (1). http://ijkcdt.net/_common/do.php?a=full&b=12&bidx=193&aidx=2309
Nicholas, D., D. Clark, and I. Rowlands. 2010c. "Unique Perspectives on User Behaviour for Multi-media Content: Case Study Europeana." Online Information Conference Proceedings, London, December 2010, 127–135.
Nicholas, D., I. Rowlands, D. Clark, and P. Williams. 2011. "Google Generation II: Web Behaviour Experiments with the BBC." Aslib Proceedings, 63 (1): 28–45.
Nicholas, D., I. Rowlands, M. Jubb, and H.R. Jamali. 2011. "The Impact of the Economic Downturn on Libraries: With Special Reference to University Libraries." The Journal of Academic Librarianship, 36 (5): 376–382.
Nicholas, D., I. Rowlands, A. Watkinson, D. Brown, and H.R. Jamali. 2012. "Digital Repositories Ten Years On: What Do Scientific Researchers Think of Them and How Do They Use Them?" Learned Publishing, 25: 195–206. doi:10.1087/20120306
OCLC. 2007. "Libraries and the Internet of Things." OCLC NextSpace, Issue 24. https://www.oclc.org/publications/nextspace/issues.en.html
OCLC. 2012. "Libraries: A Snapshot of Priorities and Perspectives." http://www.oclc.org/reports
Ofcom. 2014. "Communications Market Report: Converging Communications Markets." UK: Ofcom.
Office for National Statistics (ONS). 2010. "U.K. Gross Domestic Expenditure on Research and Development 2008." Statistical Bulletin. London: ONS, March 2010.
Office for National Statistics (ONS). 2011. Labour Force Survey. LFS Data Service.
O'Hara, K. 2004. Trust: From Socrates to Spin. Cambridge: Icon Books.
Olds, J. 2008. "Science Rebooted: What the Trend Towards On-Line Collaboration Might Mean for a Journal Like the Biological Bulletin." The Biological Bulletin, 215: 1–2.
Outsell. 2007. Outline of research proposal. London: Outsell UK, 2007.
Outsell. 2008. Document Delivery – Best Practices and Vendor Scorecard. Burlingame, CA, USA: Outsell Inc.
Outsell. 2009. Workflow: Information's New Field of Dreams? Outsell Inc.
Outsell. 2011. "Scientific, Technical & Medical Information: 2011 Market Forecast and Trends Report." M. Ware for Outsell Inc., USA, November 22, 2011.
Outsell. 2013. Open Access: Market Size, Share, Forecast, and Trends. Outsell Inc.
Padley, R. 2014. "Wake Up to What the 'Article of the Future' Is Really All About." Semantico blog. Accessed May 26, 2015. http://www.semantico.com/2014/03/
Pareto, V., and A. Page. 1971. Manual of Political Economy (translation of Manuale di economia politica). New York: A.M. Kelley. ISBN 978-0-678-00881-2.
Park, W. 2009. "The Publishing World is Open, Not Flat." Learned Publishing, 23 (4): 326–328.
Park, W. 2010. "Capturing the iUser: Web 2.0 'Freemium' Business Models." Presentation at STM Association meeting at Frankfurt Book Fair, October 2010.
Park, W. 2011. Freemium (was: Persee in Danger). Email to [email protected], March 17, 2011.
PEER. 2012a. Final Report – PEER Economics Research. Milan, Italy: ASK Research Centre, Bocconi University.
PEER. 2012b. PEER Behavioural Research: Authors and Users vis-à-vis Journals and Repositories. LISU and Loughborough University.
Pickard, A.J. 2013. Research Methods in Information. 2nd edition. London: Facet Publishing.
Plutchak, T.S. 2007. "What's a Serial When You're Running on Internet Time?" The Serials Librarian, 52 (1/2): 79–90.
Polanyi, M. 1962. "The Republic of Science – Its Political and Economic Theory." Minerva, 1: 54–74.
Porat, M.U. 1977. The Information Economy: Definition and Measurement. Washington, DC: Office of Telecommunications.
Porter, M.E. 1980. Competitive Strategy: Techniques for Analyzing Industries and Competitors. New York: The Free Press.
Poynder, R. 2011a. "Interview with Derk Haank." Information Today, January 14, 2011.
Poynder, R. 2012b. "Reed Elsevier: A Short History of Two Days in July (and Why Investors Should Care)." Interview with C. Aspesi, Bernstein Research, 2012.
Poynder, R. 2014. "Interview with Jean-Gabriel Bankier, President and CEO, bepress." Open and Shut, April 2014. http://poynder.blogspot.co.uk/2014/04/interview-with-jean-gabriel-bankier.html
Price, D. de S. 1963. Little Science, Big Science. New York: Columbia University Press.
Procter, R., R. Williams, J. Stewart, M. Poschen, H. Snee, A. Voss, and M. Asgari-Targhi. 2010. "Adoption and Use of Web 2.0 in Scholarly Communications." Philosophical Transactions of the Royal Society A. Accessed May 26, 2015. http://rsta.royalsocietypublishing.org/content/368/1926/4039
Publishing Technology/Bowker Market Research. 2014. Academic and Trade Online Communities.
Pullinger, D., and C. Baldwin. 2002. The SuperJournal Project – Electronic Journals and User Behaviour. British Library R&D Report No. 61.
Read, M. 2010. "Open Access for UK Research – JISC's Contributions." Jisc.
Rightscom. 2009. Overcoming Barriers: Access to Research Information Content. London: Research Information Network. Accessed May 26, 2015. http://www.rin.ac.uk/barriers-access
RIN. 2011. Access to Scholarly Content: Gaps and Barriers. (See Rowlands and Nicholas, 2011.)
RIN. 2012. Accessibility, Sustainability, Excellence: How to Expand Access to Research Publications. Report of the Working Group on Expanding Access to Published Research Findings [the 'Finch Report']. Accessed May 26, 2015. http://www.researchinfonet.org/wp-content/uploads/2012/06/Finch-Group-report-FINAL-VERSION.pdf
RoMEO. 2009. Publisher Copyright Policies and Self Archiving. Accessed December 2009. http://www.sherpa.ac.uk/romeo/
Rosenbaum, S.E., C. Glenton, and J. Cracknell. 2008. "User Experiences of Evidence-Based Online Resources for Health Professionals: User Testing of The Cochrane Library." BMC Medical Informatics and Decision Making, 8: 34. Accessed May 26, 2015. http://www.biomedcentral.com/1472-6947/8/34
Rowlands, I., and D. Nicholas. 2005. "New Journal Publishing Models – An International Survey of Senior Researchers." London: CIBER, University College London.
Rowlands, I., and D. Nicholas. 2011. "Access to Scientific Content: Gaps and Barriers." London: CIBER, University College London. For RIN. Accessed May 26, 2015. http://www.rin.ac.uk/system/files/attachments/gaps_final_report_low_res.pdf
Rowlands, I., D. Nicholas, and P. Huntington. 2004. "Scientific Communication in the Digital Environment: What Do Authors Want? – Findings of an International Survey of Author Opinion, Project Report." London: CIBER, University College London.
Rowlands, I., D. Nicholas, B. Russell, N. Canty, and A. Watkinson. 2011. "Social Media Use in the Research Workflow." Learned Publishing, 24 (3): 183–195.
Rowlands, I., D. Nicholas, P. Williams, and D. Clark. 2010. "Google Generation Research: A Report." CIBER. Accessed May 26, 2015. http://www.ucl.ac.uk/infostudies/research/ciber
Rowlands, I., D. Nicholas, P. Williams, P. Huntington, M. Fieldhouse, B. Gunter, R. Withey, H. Jamali, T. Dobrowolski, and C. Tenopir. 2008. "The Google Generation: The Information Behaviour of the Researcher of the Future." Aslib Proceedings, 60 (4): 290–310.
Royal College of Nursing. 2013. "Information Needs of Nurses." London, 2013.
SCONUL. SCONUL Access is a service offered by SCONUL; details are on its web site: http://www.sconul.ac.uk/sconul-access
Shellenbarger, S. 2003. "Multitasking Makes You Stupid." Wall Street Journal, February 27, 2003.
Shirky, C. 2008. Here Comes Everybody – How Change Happens when People Come Together. New York: Penguin Books.
Shirky, C. 2010. Cognitive Surplus: Creativity and Generosity in a Connected Age. New York: Penguin Books.
Small, H. 2006. "Tracking and Predicting Growth Areas in Science." Scientometrics, 68 (3): 595–610.
Smith, A. 1776. An Inquiry into the Nature and Causes of the Wealth of Nations. London: W. Strahan and T. Cadell.
Suber, P. 2012. Open Access. Cambridge, MA, USA: MIT Press.
Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York: Doubleday.
Swan, A. 1999. "What Authors Want: The ALPSP Research Study on the Motivations and Concerns of Contributors to Learned Journals." Learned Publishing, 12 (3): 170–172.
Swan, A., and S. Brown. 2002. Authors and Electronic Publishing: The ALPSP Research Study on Authors' and Readers' Views of Electronic Research Communication. Worthing: ALPSP.
Swan, A., and S. Brown. 2003. "Authors and Electronic Publishing: What Authors Want from the New Technology." Learned Publishing, 16 (1): 28–33.
Swan, A., and S. Brown. 2005. Open Access Self-Archiving: An Author Study. Truro, Cornwall: Key Perspectives Ltd. Accessed December 2009. http://www.keyperspectives.co.uk/
Swoger, B. 2014. "Does the Scientific Journal Have a Future?" Scientific American, June 18, 2014.
Tapscott, D. 1998. Growing Up Digital: The Rise of the Net Generation. New York City, NY, USA: McGraw-Hill.
Tapscott, D. 2008. Grown Up Digital: How the Net Generation is Changing Your World. New York City, NY, USA: McGraw-Hill.
Tapscott, D., and A.D. Williams. 2006. Wikinomics – How Mass Collaboration Changes Everything. New York: Portfolio.
Taylor, M. 2010. The Scientific Century: Securing Our Future Prosperity. Advisory group for The Royal Society, chaired by Sir Martin Taylor.
TBI Communications. 2014. Learned Society Attitudes towards Open Access. Accessed May 2014. http://www.edp-open.org/images/stories/doc/EDP_Society_Survey_May_2014_FINAL.pdf
Tenopir, C., and D.W. King. 2000. Towards Electronic Journals: Realities for Scientists, Librarians, and Publishers. Washington, DC: Special Libraries Association.
Tenopir, C., and D.W. King. 2004. Communication Patterns of Engineers. New York: IEEE Press, Wiley Interscience.
Tenopir, C., D. Nicholas, et al. 2014. Study on How Researchers Assign and Calibrate Authority. Funded by the Sloan Foundation, USA, February 2014.
Toffler, A. 1970. Future Shock. New York: Random House.
UK Department of Business, Innovation & Skills. 2009. The Sectoral Distribution of R&D: 2009 Scoreboard. London: BIS.
UK Department of Business, Innovation & Skills. 2011. Innovation and Research Strategy for Growth. London: BIS.
UK House of Commons, Science and Technology Committee, Select Committee of Enquiry on Scientific Publications. 2004. Scientific Publications: Free for All? Tenth Report of Session 2003–04. Accessed December 2009. http://www.publications.parliament.uk/pa/cm200304/cmselect/cmsctech/399/399.pdf
UK Office of Fair Trading. 2002. The Market for Scientific, Technical and Medical Journals – A Statement by the OFT. London: OFT.
UK Parliament. 2013. House of Lords Science and Technology Committee. UK Government Open Access Policy. London, February 22, 2013.
United Nations Educational, Scientific and Cultural Organization (UNESCO). 2010. UNESCO Science Report 2010 – The Current Status of Science Around the World. Paris, France: UNESCO Publishing.
USA, Congress of the United States. 2009. "Fair Copyright in Research Works Act (FCRWA)." S.1373. Washington, DC: USGPO Bookstore.
van de Velde, E. 2013. "Open Access Politics." Blog post, March 26, 2013.
Velterop, J. 2014. "What Is Holding Back the Transition to Open Access If It Does Not Cost More?" November 2014. blog.scielo.org/en/2015/09/10/what-is-holding-back-the-transition
Wallace, J.E. 2011. "PEER: Green Open Access – Insight and Evidence." Learned Publishing, 24 (4): 267–277.
Ware, M. 2005. Online Submission and Peer Review Systems. Bristol: Mark Ware Consulting.
Ware, M. 2009. Access by UK Small and Medium-Sized Enterprises to Professional and Academic Information. Publishing Research Consortium.
Ware, M. 2012. Scientific, Technical and Medical Information: 2012 Final Market Size and Share Report. Outsell. Accessed November 2012. http://www.outsellinc.com/store/products/1107-scientific-technical-medical-information-2012-market-size-share-forecast-and-trend-report
Ware, M., and M. Mabe. 2009a. The STM Report – An Overview of Scientific and Scholarly Journal Publishing. The Hague: International Association of STM Publishers.
Ware, M., and M. Mabe. 2012. The STM Report – An Overview of Scientific and Scholarly Journal Publishing. 3rd edition. The Hague: International Association of STM Publishers.
Weinberger, D. 2007. Everything Is Miscellaneous: The Power of the New Digital Disorder. New York: Times Books, Henry Holt and Company.
Weinberger, D. 2012. Too Big to Know: Rethinking Knowledge Now that the Facts Aren’t the Facts, Experts are Everywhere, and the Smartest Person in the Room is the Room. New York City, NY, USA: Basic Books.
Weller, M. 2011. The Digital Scholar: How Technology is Transforming Scientific Practice. London: Bloomsbury Academic.
Weller, M.J., and J. Dalziel. 2007. “Bridging the Gap between Web 2.0 and Higher Education.” In Handbook of Research on Social Software and Developing Community Ontologies, edited by S. Hatzipanagos and S. Warburton, 466–78. London: IGI UK Global.
Wilson, F. 2006. “My Favorite Business Model.” AVC blog dated March 23, 2006. Accessed May 26, 2015. http://avc.com/2006/03/my_favorite_bus/
Wilson, J.Q. 1989. Bureaucracy: What Government Agencies Do and Why They Do It. New York City, NY, USA: Basic Books Classics.
Wolf, M. 2007. Proust and the Squid: The Story and Science of the Reading Brain. New York: Harper Collins.
Zalta, E.N., and U. Nodelman. 2010. “Funding Models for Collaborative Information Resources: The Stanford Encyclopedia of Philosophy Experience.” Information Standards Quarterly, Fall 2010, 22 (4): 15–16. doi:10.3789/isqv22n4.2010.0
Zickuhr, K., and A. Smith. 2012. “Digital Differences.” Pew Research Internet Project, April 13, 2012. http://www.pewresearch.org
Zudilova-Seinstra, E. 2013. “Designing the Article of the Future.” Elsevier Connect. Posted January 16, 2013. Accessed December 2013. https://www.elsevier.com/connect/designing-the-article-of-the-future
Index

Abstract & Indexing Services (A&I) 108–110, 327
Academia 2, 3, 9, 12, 19, 22, 33, 37, 48, 51, 53, 56, 57, 84, 95, 96, 105, 108, 133, 145, 150, 161, 175, 182–183, 200, 202, 231, 237–239, 250–252, 255, 259–260, 267–268, 273, 279–281, 287, 296, 307, 338, 341, 384, 395, 397, 398, 402
Academia.edu 39, 40, 242
Academic Publishing in Europe (APE) 215
Access to Research 9, 11, 15, 87, 130, 150, 177, 208, 230, 252, 280, 284–285, 292, 295, 303, 307, 365, 374, 398, 402
Ackoff (Russell) 32
Affiliated 3–5, 12, 19, 83, 143–145, 150, 207, 249, 258, 263, 270, 282–283, 379, 383
Allen Brain Atlas 95, 193, 252
ALPSP 39, 100, 300, 304–305
Allington (Daniel) 19, 89–91
Alumni 65, 210, 252, 325–326, 337
Amazon 45, 110, 120, 126, 159–160, 165, 169, 177, 188, 216, 232, 296, 379, 383
Ambient Findability 117–118
American Chemical Society (ACS) 217, 297, 326
American Publishers Association (APA) 340
Amin (Mayer) 34
Anderson (Chris) 44, 53, 159, 161, 331, 341, 381
Anderson (Rick) 67, 91, 103, 206, 317
Apple 85, 114, 160, 174, 402
API 222, 391
Article of the Future 214–215
Article Processing Charge (APC) 68, 86, 304, 343, 347–348
arXiv 84, 95, 147, 212–213, 329–331, 340, 345
Aspesi (Carl) 388
Association of Research Libraries (ARL) 212, 316, 376
Baldwin (Christine) 305
Barriers 68, 200, 238, 246, 282
Battelle (John) 112–113
BBC 117, 125, 197, 292
Beckett (Chris) 349
Bell (Alice) 90
Bell (Daniel) 31–32
Bernstein Research 388
Bath Information Data Services (BIDS) 239
Big Deals 84–86, 91, 95, 156, 306, 316–318, 326, 335, 376
Big Science 3, 23, 227, 296, 384, 397
BioOne 304
bioscientifica 304
Blogs 3, 14, 24, 75, 77, 89, 102, 108, 113, 119, 127, 164, 197, 206, 213, 215–216, 328
Blown to bits 110
BioMed Central (BMC) 342, 348, 361
bouncing 115, 142
Brain Research 100
Brand (Stewart) 336
BRIC 25, 37, 96
Brienza (Casey) 91
Brin (Sergey) 111, 167
British Library 8, 76, 114, 117, 136, 143, 207, 236, 323, 333
British Medical Journal (BMJ) 223
Brown (Andrew) 87–88, 197
Brown (Patrick) 347
Brown (Tracey) 236
Caltech 109
California University 132, 318, 347, 387
Cambridge University 81, 87, 92, 166
– Cambridge University Press (CUP) 326
Canadian Research Knowledge Network (CRKN) 326
Cap Gemini UK 276
Carr (Nicholas) 54, 73
CBD Directory 13, 298
Center for Studies in Higher Education (CSHE) 132–133
CERN 127, 227, 292
Charleston Observatory 103, 180
China 25, 27, 37, 101, 112, 187, 191, 207, 231, 241, 249–250, 393
Clearinghouse for Open Research in the U.S. (CHORUS) 375–377
CIBER 68, 103, 114, 116–117, 123, 133, 134, 142–143, 145, 149, 174, 177–178, 180, 217–218, 225, 238–239, 246, 321–322, 349, 358
CiteULike 27, 226
Citizen Cyberscience Centre 292
Citizen Science 179, 288–289, 293
City University 142
CLOCKSS 375
Coase 171
Coase’s law 126
collaboration 3, 23, 118, 130, 151, 168, 286
Collaboratories 127–128
Collection budget 26
Cognitive computing 75
Cognitive surplus 156, 173, 291
communities 5, 57, 81, 91, 99, 112, 222–224
Corbyn (Zoe) 81–82
Cornell University 43, 82, 147, 330
Cornyn (John) 372
Counter 80, 121, 163, 340, 386, 388
Cox (& Cox) 326
Creative Commons 76, 220, 360, 370, 400
CrossRef 341, 380
crowd sourcing 21, 144, 199, 200, 398
CUDOS 33
DataCite 24, 227
Davis (Phil) 82, 149
DeepDyve 27, 55, 95, 271, 284, 321, 333, 337, 380
Democratisation 29, 49, 53, 57, 96, 125, 163–165, 167, 240, 290
demographics 5, 116, 143, 166, 192, 227, 384
Denmark 284, 285, 333, 351
Dezenhall (Eric) 359
Designed serendipity 168, 286
Deutsche Bank 84
Digg 106, 169, 235
Digital consumers 3, 22, 219, 288
Digital Object Identifier (DOI) 225, 227, 341
Digital natives 117, 139, 201, 380, 397
Digital rights management (DRM) 225–227, 127, 229
DIKW (data, information, knowledge, wisdom) 32
Disenfranchised 2, 3, 4, 9, 51, 167, 251, 255, 271, 307, 334, 392
Directory of Open Access Journals (DOAJ) 98, 351
Document delivery 207, 320, 336
Double dipping 71, 356, 368, 400
Drucker (Peter) 30, 198, 253
Dunbar (Robin) 175
Dunning (Alastair) 236
Dysfunctional 17, 93, 152
Eagleman (David) 54
eBay 126, 177, 188, 232
Economies of Scale 42–43, 162–163, 302, 304, 311
Economists 277
EDP Open 303
epidemics 157–158
Eisen (Michael) 85, 347, 375
Electronic Publishing Services (EPS) 77
Elsevier
– Elsevier ScienceDirect 67, 93, 213, 328
– Elsevier Scirus 24, 67, 109, 114, 206, 340
Emerald 66, 208
EndNote 182
Endocrinology 304–305
Endowments 330
Engineers 148, 187, 243, 252, 278
e-Legal Deposit 71
Ernst and Young 239
Esposito (Joseph) 90, 234, 330
e-theses 14, 101
Europeana 144, 218
Eurostat HRST 334
Evans and Wurster 110
Exane BNP Paribas 66, 96
Facebook 3, 14, 50, 51, 119, 139, 175, 185, 232
Federal Research Public Access Act (FRPAA) 372–373
Fair Access to Science and Technology Research (FASTR) 372–374
Faxon Institute 135–136
Fenner (Martin) 93
Figshare 14, 51, 200, 213, 120
Finch (Janet) 102, 364
Filtering 33, 47, 73–74
Fit for purpose 17, 57, 65, 78, 127, 395
Flickr 162, 172, 187
Frantsvag (Jan Erik) 98
Freemium 161–162
Friend (Fred) 317, 342
Frontiers 45, 230
FundRef 375
Frustration gap 61–63
Future Shock 33, 46
Gaps and Barriers 19, 67, 358
Galaxy Zoo 167, 200
Galton (Francis) 164
Gardner (Tracey) 108
Gartner
– Gartner Research 71
– Gartner Hype Cycle 71
– Scholar 72
Gates (Bill) 118
Genbank 76, 95, 196
Gilder’s law (George Gilder) 49
Ginsparg (Paul) 329, 345
Gladwell (Malcolm) 44, 55, 157
Gold OA 200, 304, 366, 369, 371
Google
– Mantra 111–114
– Generation 114–117
– Google+ 111, 185, 232
– Google library digitisation 111
Gowers (Timothy) 85–87
Green OA 349, 367
Greenfield (Susan) 126
Grey OA 343–345
Grey literature 343–345
Haank (Derk) 317, 385
Hank (Carolyn) 216
haplotype 193, 252
Hardin (Garrett) 155
Hargreaves 221, 368
Harij (Ramesh) 276
Harris 359
Harvard University 104, 317
Higher Education Funding Council England (HEFCE) 27, 86, 371, 392
Higher Education Institute (HEI) 27, 64
Higher Education Policy Institute (HEPI) 130
Higher Education Statistics Agency (HESA) 5, 19
HighWire 304
Hindawi 38, 348
Hippocampus 54
Horrigan 116
Houghton (John) 333, 350, 367
House of Lords Inquiry 370
Hubs 213, 308, 382, 399–401
Hybrid OA 304, 357, 400
hyperauthorship 128
ICSTI 231
Ideagoras 119
Incognito 54
India 37, 54
Information Economy 29, 30–32, 77
Information Overload 28, 47–48
Information Society 30–48
Information usage 144, 177
Ingenta
– Ingenta Institute 242, 323
Inger (Simon) 108
Initial Public Offering (IPO) 112
InnoCentive 90, 119, 126, 167
Institutional Repositories (IRs) 71, 299, 346
Interactivity 214
Intellectual property (IP) 3, 34, 37
InterLibrary Loans (ILL) 284, 322
International Data Corporation (IDC) 107–108
Internet 13–14, 16–17, 44, 50–53
iPhone 174, 217
Ithaka 309
Jeffrey (Keith) 150
Joint Information Services Committee (Jisc) 65, 181, 310
IRel 326
JSTOR 199, 304
Kahneman (Daniel) 76
Kanter 276
Kasparov (Garry) 167
Keen (Andrew) 120, 165
King (Christopher) 128
King (Donald) 140, 147
Knovel 94, 278–279
Knowledge Economy 30, 263
Knowledge workers 2–4, 6, 9
Krotoski (Aleks) 125
Krugman 278
Lanier (Jaron) 122
Large Hadron Collider (LHC) 220
Learned Societies 12–13, 305–309
Lieberman (Matthew) 55
Linacre (Cathy) 308
LinkedIn 3, 14, 24, 51, 119
Listservs 82
LISU 60
Lloyd (William Forster) 155
LOCKSS 375
Long tail 5, 39, 40, 44, 53
Los Alamos National Laboratory (LANL) 127
Mabe (Michael) 339
Machlup (Fritz) 30, 31
Maguire (Eleanor) 54
Manchester University 81
Mandates 196, 352–355
Mandavilli 328
Mansfield 277
Mashups 75, 119, 382
Massive Open Online Courses (MOOCs) 27, 57
maven 135, 157
Maxwell (Robert) 99
McKinsey 73, 220
McLuhan (Marshall) 122
Mendeley 187–190
Mergers and Acquisitions (M&A) 43, 163
Merton (Robert) 33
Metcalfe’s law (Robert Metcalfe) 172–173
Micropayments 296, 402
Microsoft 114, 186
Mining
– Text 134, 219–222
– Data 219–222
Mixed methods research (MMR) 18
Monbiot (George) 82–85
Moore’s Law (Gordon Moore) 49
Morris (Sally) 354
Morville (Peter) 117
Mounce (Ross) 203
Multitasking 73
Mumford (Lewis) 165
Murdoch (Rupert) 83
Murray Rust (Peter) 81, 92
MySpace 119, 187
Myths 75
National Health Service (NHS) 209
National Institutes of Health (NIH) 195, 226
National Science Board (NSB) 241, 248
National Science Foundation (NSF) 147
Nautilus 90, 234–235
NCBI 227
Nielsen (Michael) 50
Nesli2 325–326
Net Geners 22
Networked (science) 50
Network (and multiplier) effect 29
Neuro-plasticity 54
neurotransmitters 123
New Alexandrians 119
Netscape Communications 111
N-fluence networks 175
Nicholas (David) 174
Nurse (Paul) 197
Nurses 277
OAPEN 333
OAI-PMH 346
Open Access Implementation Group (OAIG) 354
OCLC Inc 116, 225, 390
OECD 126
OfCom 138–140
Office for National Statistics (ONS) 19
O’Hara (Kieran) 176
Open Access 237, 270, 303–305, 340, 363
Open University 89
Openness 26, 340–341
Office of Science and Technology Policy (OSTP) 228, 373
ORCID 342
Outsell 77–79
Oxford University
– Oxford University Press (OUP) 208, 326
Padley (Richard) 215
Page (Larry) 167
PageRank 112
Pareto Principle 45
Patients 9, 236
Patients Participate! 236
Park (William) 55
Pay-per-view (PPV) 134, 283–285, 337
Perfect storm 309, 399
Pergamon Press 99, 303
Pew Internet and American Life Project 116
Pew Research 116, 170
Physics 147
Pickard (Alison Jane) 18
Plutchak (T. Scott) 389
Polymath 95
Porat (Marc) 30
Portal Periodicos Capes 326
Portals 309
PORTICO 375
Postpositivism 18
Poynder (Richard) 366
Price (Derek de Solla) 366
Price (Richard) 242
Partnership for Research Integrity in Science and Medicine (PRISM) 113
Professions 10
Promiscuous user 174
Prosumers 119
Public Library of Science (PLoS) 85, 347
Public libraries 208–209
Publishers Association (PA) 350, 359
Publishing and the Ecology of European Research (PEER) 359
– Ask Research 97
– CIBER Research 142–143
– LISU 60
Publishing Research Consortium (PRC) 307, 354
Publishing Technology Ltd 320
PubMed 109, 117, 345
PubMedCentral (PMC)
– Europe PubMedCentral 195, 283
Pullinger (D) 136
Reed’s law 172
Research Assessment Exercise (RAE) 141
RCUK 87, 221, 228
Reach 109, 190–191
Read (Malcolm) 81–83
Research Excellence Framework (REF) 27
Registry of Open Access Repositories (ROAR) 352
Researchers of Tomorrow 181
ResearchGate 3, 14, 94, 213
Research Information Network (RIN) 68
Research Works Act (RWA) 359
reviews 15, 212
Rewiring the brain 54–55
Rightscom 141–142
Robust 100
Roth (Dana) 109
Rowlands (Ian) 349
Royal Society (RS) 14, 197
RSS 74, 295
Russell (Narissa) 43
SAGE 210
Sale (Arthur) 242
Science Commons 76
Schmidt (Eric) 114
Scholarly Kitchen 103
ScienceOpen 341
Science and Technology Facilities Council (STFC) 150
Science Watch 128
Science 2.0 119
Sponsoring Consortium for Open Access Publishing (SCOAP3) 147
SCONUL
– SCONUL Access 210
Sloan Digital Sky Survey (SDSS) 289
Selective Dissemination of Information (SDI) 295
Semantico 215
Serials crisis 64–65
Shallows (the) 22
Shared Access to Research Ecosystem (SHARE) 376–377
Shirky (Clay) 44
Small & Medium Enterprises (SMEs) 10, 16
Shieber (Stuart) 374
SHEDL 326
Skype 22, 126
Slashdot 177
Smartphones 53, 109
Smith (Aaron) 138
Social Media
– Social networking 116
Social Science Research Network (SSRN) 213
Sociology of Science 33–34
SPARC 60
Springer
– Springer S&BM 38, 97
– Springer Nature 42
squirreling 115, 174
Stanford University 304
Stargazing 292
stickiness 158
STM Association 340
Suber (Peter) 356
SuperJournal 136–137
Surowiecki (James) 164
Swan (Alma) 284
Swets 386
Swoger (Bonnie) 92–93
Synapses 54, 123
Table of Contents (ToC) 321
Tablets 216–217
Tapscott (Don) 44
Taylor (Martin) 248
Taylor and Francis 316
TBI Communications 303
Tennessee University 145
The Economist 204
The Guardian 82
The Netherlands 351
Thomson Reuters 129
Thomson Scientific 129
Tipping point 44
Toffler (Alvin) 33, 47
Tragedy of the Commons 155–157
Tenopir (Carol) 177
Torvalds (Linus) 167
Tracz (Vitek) 348
Trust 370
Turnaways 321–322
Twigging phenomenon 45–46
Twitter 14, 91
Typology (users) 134–140
UK Russell Group 86
Unaffiliated Knowledge Workers (UKW) 3, 9, 20
United Kingdom Department of Business Innovation and Skills (UKDBIS) 36, 371
United Kingdom Office for Fair Trading (OFT) 44
United Kingdom Office of Library and Information Networks (UKOLN) 236
United Kingdom Research Councils (RCUK) 87
Universities 57–58
University College London (UCL) 54, 103, 142
UNESCO 241
Urquhart (Donald) 207
Utah University 317
Valley of death 2, 69–70
Velterop (Jan) 203
Version of Record (VoR) 85
Viber 3, 232
Victoria University 350
VISTA 37–38
Virtual Revolution 124–125
Vividence 111
Walk-in access 64, 324
Ware (Mark) 281
Watson 74
Web 2.0 108, 169, 206
Wellcome Foundation 82
Weller (Martin) 44, 232
WhatsApp 139, 175
WHEEL 64
Wikinomics 118–120
Wikipedia 186
Wiley 94, 224
Willetts (David) 364
Wise (Alicia) 90
Webinars 3, 14, 75, 206
Weinberger (David) 291
White House Memorandum 373–374
Wikis 119, 390
Williams (A.D.) 118
Wilson 274
Wired magazine 159
Wisdom (of the crowd) 163–166
Wolf (Maryanne) 121
Worlock (David) 40
Workflow processes 224–226
Wyden (Ron) 372
Yahoo 186
YouTube 188
Zickuhr (Kathryn) 138
Zuckerberg (Mark) 167