Reporting Public Opinion: How the Media Turns Boring Polls into Biased News

This book is about how opinion polls are reported in the media.


English · 148 pages · 2021


Table of contents:
Acknowledgments
Contents
List of Figures
List of Tables
1 Bringing Public Opinion to the Public: From Polls to Media Coverage
1.1 From Numbers to News
1.2 Outline of the Book: What You Will Find
1.3 Limitations: What You Will Not Find
1.4 Concluding Remarks
References
2 The Four Steps of Poll Coverage: Creating, Selecting, Reporting and Responding
2.1 Who Cares About Which Polls?
2.2 From Questions to Opinions: Four Activities
2.2.1 Creating a Poll
2.2.2 Selecting, Reporting, and Reacting
2.3 A Common Framework for All Steps
2.4 Concluding Remarks
References
3 Explaining How Media Outlets Select Opinion Polls: The Role of Change
3.1 Opinion Polls as News
3.2 Studying Opinion Polls in Denmark and the United Kingdom
3.3 Operationalisations of Change in Opinion Polls
3.4 Linking Opinion Polls to News Articles: Leaving Everything to Change?
3.5 How Change Matters for the Selection of Polls
3.6 Is the Significant Change Insignificant?
3.7 Concluding Remarks
References
4 Characteristics of Opinion Poll Reporting: Creating the Change Narrative
4.1 Reporting About Opinion Polls
4.2 Methodological Considerations in Poll Reporting
4.3 Measuring Change Narratives and Methodological Considerations
4.4 Reporting Change, Rather Than Statistical Uncertainty?
4.5 A Systematic Look at the Reporting of Methodological Details
4.6 Concluding Remarks
References
5 Reactions and Implications: How Do the Elite and the Public Respond to Polls?
5.1 Biased Responding to Polls
5.2 The Bandwagon Effects of Opinion Polls
5.3 Three Ways of Responding: Quotes, Shares and Retweets
5.3.1 Elite Comments, Reactions and Quotes in the Danish Poll Coverage
5.3.2 Sharing the Poll-Related News of the Guardian
5.3.3 Retweeting Poll Related Information
5.4 Concluding Remarks
References
6 Alternatives to Opinion Polls: No Polls, Vox Pop, Poll Aggregators and Social Media
6.1 No Polls
6.2 Vox Pops
6.3 Poll Aggregators
6.4 Social Media
6.5 Comparing the Alternatives
6.6 Media Reporting of Alternatives
6.7 Concluding Remarks
References
7 Conclusion: How the Media Could Report Opinion Polls
7.1 Understanding Opinion Polls in the Media
7.2 Democratic Implications: Change We Believe In?
7.3 Future Work
7.4 Final Conclusions
References

Reporting Public Opinion How the Media Turns Boring Polls into Biased News Erik Gahner Larsen · Zoltán Fazekas


Erik Gahner Larsen Conflict Analysis Research Centre University of Kent Canterbury, UK

Zoltán Fazekas Department of International Economics, Government and Business Copenhagen Business School Frederiksberg, Denmark

ISBN 978-3-030-75349-8
ISBN 978-3-030-75350-4 (eBook)
https://doi.org/10.1007/978-3-030-75350-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: kenkuza_shutterstock.com

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Acknowledgments

We are grateful to Will Jennings for providing polling data from the UK, to the Department of Political Science at the University of Southern Denmark for financing the Danish data collection and coding, and to Palgrave Macmillan for giving us the opportunity to write a book about how opinion polls are reported in the media. We thank friends and colleagues at the University of Southern Denmark, the University of Oslo, the University of Kent and Copenhagen Business School who, over the last seven years, have provided constructive feedback at the various stages of this project. Erik would like to thank Parisa for her support over the years. Zoltán would like to thank his family for their support and patience over the years.


List of Figures

Fig. 1.1 Poll embargos in a comparative perspective, 133 countries. Note: Poll embargos across the world from a joint WAPOR/ESOMAR 2017 survey fielded between July 11 and October 1, 2017 (Source: p. 13 in ESOMAR and WAPOR [2018])
Fig. 2.1 Number of opinion polls, Denmark, 2011, 2015 and 2019
Fig. 2.2 Number of opinion polls, United Kingdom, 2000–2019
Fig. 2.3 Theoretical illustration of cumulative effects
Fig. 3.1 Party support evolution based on all polls, Denmark, 2011–2015
Fig. 3.2 Party support evolution based on all polls, United Kingdom, 2000–2019
Fig. 3.3 Change in opinion polls leads to more mentions in the mass media in Denmark. Note: Predicted values for +/− two standard deviations listed
Fig. 3.4 Change in opinion polls leads to more mentions in the mass media in the United Kingdom. Note: Predicted values for +/− two standard deviations listed
Fig. 4.1 Change in the polls and horse race coverage, Denmark
Fig. 4.2 Change in the polls and horse race coverage, United Kingdom
Fig. 4.3 Methodological considerations (statistical uncertainty) in the poll coverage, Denmark
Fig. 4.4 Methodological considerations (statistical uncertainty) in the poll coverage, United Kingdom
Fig. 4.5 Reporting of methodological details, overview of results
Fig. 5.1 Change in the polls and reactions in the articles, Denmark
Fig. 5.2 Change in the polls and reactions by politicians in the articles, Denmark
Fig. 5.3 Share count and poll change in the Guardian
Fig. 5.4 Heterogeneity in share count and poll change in the Guardian
Fig. 5.5 Example of tweet on Westminster voting intention from @BritainElects
Fig. 5.6 Distribution of poll difference and retweets
Fig. 5.7 Retweet count and poll difference

List of Tables

Table 2.1 The four activities of opinion polls coverage
Table 3.1 Operationalisations of change and availability
Table 3.2 Selection as a function of change, Denmark
Table 3.3 Selection as a function of change, United Kingdom
Table 3.4 Change and article mentions when controlling for significant differences, Denmark
Table 3.5 Change and article mentions when controlling for significant differences, United Kingdom
Table 4.1 Guidelines with minimal requirements of disclosing methodological information
Table 4.2 Horse race coverage as a function of poll change in Denmark
Table 4.3 Horse race coverage as a function of poll change in the United Kingdom
Table 4.4 Uncertainty reporting as a function of poll change in Denmark
Table 4.5 Uncertainty reporting as a function of poll change in the United Kingdom
Table 4.6 Studies included in review of methodological details
Table 5.1 Probability of quoting a politician or expert as a function of poll change
Table 5.2 Probability of quoting a politician as a function of poll change
Table 5.3 Share count as a function of poll change in the Guardian
Table 5.4 Share count as a function of poll change and horse race coverage in the Guardian
Table 5.5 Retweets as a function of poll difference
Table 6.1 Comparing the alternatives to polls
Table 6.2 The reporting on alternatives

CHAPTER 1

Bringing Public Opinion to the Public: From Polls to Media Coverage

1.1 From Numbers to News

Public opinion is a cornerstone of democratic politics. Luckily, over the last century, our ability to measure and understand public opinion in the form of opinion polling has improved significantly. Today, opinion polls provide reliable information on what the public believes. In contemporary democracies, however, such opinion polls do not come out of nowhere, and once an opinion poll is conducted, the public is not simply presented with a series of numbers in an unfiltered and descriptive manner. On the contrary, the mass media play a pivotal role in bringing information from opinion polls to the public with interpretations, narratives and qualitative assessments. This process is best described as turning relatively boring numbers into exciting news stories. It is neither a simple nor a neutral process; indeed, it is worth a book-length treatment. This book is a deep dive into how opinion polls travel from the commissioning of a new poll to the moment they reach the public in the news media and on social media. Despite the complexities and nuances in how opinion polls are reported, we demonstrate that there is a systematic component that can help us understand how exactly opinion polls are turned from numbers into news.

The news coverage of opinion polls provides the public, pundits and politicians with information on the nature of political competition, shapes the public's attitudes and matters for political outcomes (Ansolabehere & Iyengar, 1994; Rothschild & Malhotra, 2014; Searles et al., 2018; van der Meer et al., 2016; Westwood et al., 2020). It is difficult to follow political coverage without, directly or indirectly, being exposed to political polls and, unsurprisingly, political polls are among the most important items in the political news. These opinion polls on the vote intention of a population are the central focus of this book.

Recently, despite the ability of opinion polls to aptly measure public opinion, polls in the media have attracted substantial negative attention in several countries. Salient events, such as the Brexit referendum in 2016 and the presidential election in the United States in the same year, have for sceptics and critics provided the final nail in opinion polling's coffin, showing that opinion polls cannot and should not be trusted or be part of the media coverage. Is the scepticism warranted? Are opinion polls to blame? As we demonstrate throughout this book, the picture is much more complex, and a key issue with opinion polls has nothing to do with the polls themselves but with the process of bringing them to the public. While there are important quality concerns to discuss, the more important task is to understand how opinion polls are used and turned into news stories via a series of decisions by media outlets. Polls are, on average, doing a good job of providing snapshots of what the public believes, but even high poll quality does not always guarantee a successful journey through the media. This makes poll quality a necessary, but not sufficient, condition for a correct understanding of political competition. This is not to say that there are no low-quality opinion polls out there. There definitely are.
However, what we show is that, even when looking exclusively at high-quality opinion polls, the media dynamics of how such polls are reported will lead to a discrepancy between what the polls show and what the reporting of polls implies. We shed light on the important role of mass media in bringing opinion polls to the public. This can help us understand why our perceptions of opinion polls are biased. Key to our theoretical framework is how journalists select and report polls and how several actors, including politicians, pundits and the broader public respond to the stories about polls that journalists write. This is not a linear process based on a single decision made by a journalist, but a lengthy process involving multiple decisions with distinct challenges taken by different actors. However, there is a


guiding principle for this process, namely a strong preference for change. We show that this preference is not isolated to journalists, but one we also find in the public at large. We provide a framework to understand this process and demonstrate the challenges for the mass media in covering opinion polls (for the initial research that inspired this work, see Larsen & Fazekas, 2020). These challenges include, but are not limited to, whether an opinion poll is covered at all, the reporting of important information on the limitations of specific polls, the interactions with politicians and experts in writing about polls, and how opinion polls are shared on social media. Empirically, we rely on several data sources from different countries. First, we use data on opinion polls. Importantly, whereas previous studies of the media coverage of opinion polls primarily look at polls that are covered, we also rely on polls that were not covered by the news media. This enables us to better understand why some opinion polls are deemed newsworthy. Second, we look at the media reporting of opinion polls with the aim of understanding why and how some opinion polls are covered using a specific angle and journalistic narrative. Third, we rely on metrics of how much attention opinion polls and their related coverage get from the public. That is, we leverage data that can help us understand why some opinion polls are more likely to go viral. These different types of data make it possible to provide an overview of how opinion polls are brought to the public through a series of decisions and, most importantly, how these decisions are interrelated, generating the narrative around public opinion representation. This is currently missing in the literature, and the following chapters address this omission.
A key challenge in studying the media coverage of opinion polls is that media systems differ across countries (Hallin & Mancini, 2004). For election coverage, Dimitrova and Strömbäck (2011), for example, show that different sources and media frames are used in the United States and Sweden. Thus, we need to be cautious when drawing conclusions about how opinion polls are covered in different countries. For that reason, while we primarily rely on original data from Denmark and the United Kingdom, we also cover secondary research from across the world. In Denmark, a multiparty Western European democracy, there are no legal restrictions on the publication of polls (Pedersen, 2012), and poll coverage is widespread in newspapers (Bhatti & Pedersen, 2016; Pedersen, 2014). Political coverage is relatively neutral and media outlets have no partisan leanings. Specifically, Denmark is characterised by a high newspaper circulation and a neutral commercial press (Hallin & Mancini, 2004). The United Kingdom, on the other hand, has a different political system with fewer parties, a different electoral system and a press characterised by greater political polarisation. In this context, the press plays an important role in political news coverage, often marked by strategic selection and framing of the issues it reports on (Scammell & Semetko, 2008). This allows us to examine whether the key trends and mechanisms identified in the Danish context generalise to other contexts.

We are not the first and hopefully not the last to pay attention to how the news media cover opinion polls. There is a burgeoning literature on opinion polls in the media (Andersen, 2000; Bhatti & Pedersen, 2016; Groeling, 2008; Groeling & Kernell, 1998; Larson, 2003; Matthews et al., 2012; Paletz et al., 1980; Searles et al., 2016; Tryggvason & Strömbäck, 2018; Weaver & Kim, 2002), and we draw heavily on it in this book. Still, studies of opinion polls often attend to only one of the decisions in the chain of interrelated dynamics and, for that reason, do not encapsulate the various stages that paint the full picture of how the mass media cover opinion polls. In other words, individual studies say a lot about specific issues related to the coverage of opinion polls but do not make inferences beyond the specific issue of interest. Here, we widen the theoretical and empirical scope and link the various studies in the literature together in order to provide an overview of how opinion polls are covered, from their initial collection to the media coverage and sharing by the broader public.
In doing this, we not only provide novel empirical findings on the coverage of opinion polls but also position and link the rich literatures on opinion polls within a unified framework. This overview can help us better understand how political polls are reported and explain why there is a discrepancy between what the opinion polls actually show and the information on opinion polls that is available to the public, even without astounding inaccuracies in the bulk of factual reporting. One puzzle we examine in great detail is how media coverage of opinion polls often emphasises change when, in fact, most opinion polls show little to no change over short periods of time. We thus look at similarities and differences in how key actors engage with opinion polls at different stages, or whether journalists and the public


make similar choices when they focus on which opinion polls are relevant to distribute. The central insight of this book is that the opinion polls circulated in public do not reflect the opinion polls that are conducted. In other words, the opinion polls that people engage with are not representative of what all opinion polls show. Not all polls are reported, and the ones omitted from the media coverage are not a random sample of the polls conducted: they are the ones that show stability rather than change. Even more importantly, there is no single point in the timeline of opinion polls that can explain this discrepancy in full on its own, since many journalistic considerations that drive the narrative in the reporting can also drive the selection: each step an opinion poll travels through, from its commissioning to how the public engages with it, amplifies certain characteristics of the poll. To put it differently, similar considerations related to change matter at different stages, and these choices add up to substantial discrepancies between what polls show and what is available to the public. As hinted at already, change in a broad sense, from volatility to deviation, is the common driving force behind how polls are picked up, described and engaged with by journalists, pundits, politicians and the public. That is, if a new opinion poll shows that there is nothing new, such a poll is more likely to be ignored, forgotten and discounted. If an opinion poll shows change, even if that change is not statistically significant or is an outlier that no other poll confirms, such a poll is more likely to take centre stage in the news coverage and the broader narrative. Accordingly, the principal effect we shed light on in this book is akin to a snowball effect.
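This selection mechanism can be illustrated with a small simulation. The sketch below uses entirely hypothetical parameters (the level of true support, the sample size and the publication odds are our own assumptions, not the book's data): even when true support never moves at all, polls that are selected for publication because they show movement will, on average, display more change than the full set of polls.

```python
import random
import statistics

random.seed(42)

TRUE_SUPPORT = 0.30   # hypothetical party support, held constant over time
N = 1000              # respondents per poll
SELECTION_BONUS = 30  # assumed extra publication odds per point of observed change

def run_poll():
    """Simulate one poll of N respondents when true support never moves."""
    sample = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    return sample / N

all_changes, published_changes = [], []
previous = TRUE_SUPPORT
for _ in range(5000):
    estimate = run_poll()
    change = abs(estimate - previous)
    # Newsrooms are assumed to prefer polls that show movement:
    if random.random() < min(1.0, 0.1 + SELECTION_BONUS * change):
        published_changes.append(change)
    all_changes.append(change)
    previous = estimate

print(f"mean change, all polls:       {statistics.mean(all_changes):.4f}")
print(f"mean change, published polls: {statistics.mean(published_changes):.4f}")
```

Because every poll-to-poll "change" in this simulation is pure sampling noise, any gap between the two means is an artefact of selection alone: the snowball dynamic in miniature.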
A small, trivial change in an opinion poll will make the opinion poll more likely to be selected, and this change will be amplified further in the news reporting and, at the end of the day, the opinion poll will be more likely to get public attention. There are important reasons to focus on these dynamics. First, a large body of literature has demonstrated the implications of opinion polls for various outcomes related to democratic politics (we cover some of these studies in Chapter 5). That fact alone makes it paramount to understand how exactly opinion polls are being covered in the media and distributed within the public. Second, our focus helps explain existing puzzles in the literature and improve our understanding of why and when people might possess false beliefs about opinion polls. For example, one remaining puzzle in the literature that we set out to explain is the following: how


can opinion polls, on the one hand, show stability and have high predictive power for election outcomes and, on the other hand, appear so volatile and fail to meet people's expectations of what will happen in an election? The fact that opinion polls are important, or at least that politicians and legal scholars consider their implications decisive for the democratic process, is best illustrated by the staggering number of countries that have legislated embargos for opinion polls. Figure 1.1 shows an overview of poll embargos around the world for different regions. We see that there is no blackout period or poll embargo in 32% of the countries, but there is substantial variation across regions. In Europe, for example, more than half of the countries have a blackout period of 1–6 days, meaning that opinion polls will not be part of the public conversation in the days leading up to an election. Even in the absence of legal regulation, there is often self-regulation by the survey industry: principles and rules to comply with in how polls are conducted and reported. For additional information on the regulation of opinion polls in a comparative perspective, see Pedersen (2012). The findings presented here pose a bigger challenge to how we consider the role of opinion polls in a contemporary media landscape.

Fig. 1.1 Poll embargos in a comparative perspective, 133 countries. Note: Poll embargos across the world from a joint WAPOR/ESOMAR 2017 survey fielded between July 11 and October 1, 2017 (Source: p. 13 in ESOMAR and WAPOR [2018])


This is less about the quality of opinion polls and ensuring that only high-quality opinion polls are conducted. Our analysis shows that even high-quality polls can be diluted into low-quality coverage. In other words, even in scenarios where opinion polls are done well and follow all state-of-the-art guidelines, how they are used introduces challenges worthy of our attention. Our ambition with this book is not to change how opinion polls are conducted, but rather to facilitate a better understanding among researchers, journalists and the public of how we should engage with opinion polls, and to reflect upon how certain biases amplify which opinion polls we (do not) engage with in the first place. In doing this, we hope to move the focus from biases in opinion polls to processes and biases in how we engage with opinion polls. This is not to say that there are no challenges in conducting opinion polls in 2021, in particular related to ensuring representative samples, but there is an equally, if not more, important challenge: how to engage with opinion polls once they are conducted. The starting point is that opinion polls are not simply reported without any context or interpretation. If that were the case, the public would be presented with a series of numbers and some methodological details, analogous to a timetable at a bus stop. This would not be an ideal way to present opinion polls to the public, and thus the media coverage of opinion polls will always involve decisions about which polls to focus on and how to highlight them. What we do find is that the current way media outlets engage with opinion polls, in particular in how they report on change, does not always align with what the opinion polls actually show, and in most cases directly contradicts it.
We are aware that there are many different ways in which journalists, pundits, politicians and the public can respond to and engage with opinion polls, but our hope is that, however these actors perceive opinion polls, their perceptions are as closely aligned as possible with what the polls actually show. We are also aware that journalists face tremendous challenges in making opinion polls newsworthy, a challenge by no means made easier by the fact that most regularly conducted opinion polls contain little to no newsworthy information at all. To address this challenge, this book helps us better understand the key decisions involved in selecting, reporting and sharing opinion polls, which can facilitate a discussion of the role of opinion polls in media coverage. Consequently, we expand our understanding of these decisions by focusing on the various ways


journalists or the public can think of change, sometimes relying on simple comparisons, but at other times considering different parties and opinion polls at the same time. These operationalisations of change are mapped to a dimension of information availability that plays a key role in understanding how journalists make decisions about opinion polls. To summarise, public opinion polls can, to anybody but the most politically interested and informed citizens, be quite boring numbers in and of themselves, but the news coverage often presents these polls in engaging, surprising and entertaining ways. We do not have an issue with that. On the contrary, we believe that good journalism should consider the best possible ways to communicate opinion polls and make the numbers come alive. Nevertheless, what we do identify as a key challenge is ensuring that this does not lead to a biased coverage of opinion polls that skews our understanding of what public opinion actually looks like.
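The statistical intuition behind this challenge, that typical week-to-week movement in a party's support is often smaller than the sampling error of the polls measuring it, can be made concrete with a short sketch. The sample sizes and percentages below are illustrative assumptions, not figures from our data:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def change_is_significant(p1, n1, p2, n2, z=1.96):
    """Check whether the difference between two independent poll
    estimates exceeds its combined sampling error at the 95% level."""
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se_diff

# A party polls 30% one week and 32% the next, with n = 1,000 each time.
# Each poll alone carries a margin of error of roughly +/- 2.9 points,
# so the two-point "jump" is comfortably within sampling noise.
print(round(margin_of_error(0.30, 1000), 3))          # 0.028
print(change_is_significant(0.30, 1000, 0.32, 1000))  # False
```

A front page reporting that the party "gains two points" is therefore reporting movement that the poll itself cannot distinguish from no movement at all.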

1.2 Outline of the Book: What You Will Find

In the next chapter, Chapter 2, we develop a framework that focuses on the temporal dimension of how opinion polls are brought to the public via the media. This chapter serves as an introduction to the different stages that opinion polls go through, while also identifying the gaps to be addressed in the subsequent chapters. We use this to illustrate the cumulative effects present throughout the various stages, which can explain the discrepancy between what opinion polls show and what is available to the public. Throughout Chapters 3, 4, and 5, we cover the stages of opinion polls in greater detail and collectively show how opinion polls are turned into specific news stories. In Chapter 3, we focus on the selection of opinion polls. That is, we investigate what explains whether journalists decide to report on an opinion poll or not. In Chapter 4, we target the reporting of opinion polls, covering the news articles dedicated to the opinion polls that journalists have decided to report on. In doing this, we show how the selection and reporting of opinion polls are shaped by a similar preference for change. When introducing the idea of change, we give extensive consideration to how we can best measure change and what the availability of these change measures means for selection and reporting. In Chapter 5, we analyse the next natural stage in the life of opinion polls: how do politicians, experts and the public respond to


them and to the stories written about them? Essentially, we delve into the implications of how these opinion polls are selected and covered. Here, we show that both elites and the broader public have a strong preference for engaging with (responding to or sharing) opinion polls that show greater changes or support a well-defined change narrative. In Chapter 6, we turn our attention to alternatives to the reporting of opinion polls. We discuss how no opinion polls at all, poll aggregators, social media and vox pops can be seen as alternatives to opinion polls and, in particular, what their strengths and limitations are. The ambition here is not to force the reader to decide whether opinion polls are good or bad, but rather to understand how alternatives to opinion polls can mitigate or amplify the biases introduced in the previous chapters. In Chapter 7, we conclude with how the media might report on opinion polls, considering the trade-offs between what the polls often show and what journalists wish they showed. Specifically, we first discuss the implications of our findings for how we understand the political coverage of opinion polls today, and then discuss the most important questions to be answered in future work.

1.3 Limitations: What You Will Not Find

Despite our ambition to connect several studies on opinion polls in our theoretical framework, we made explicit choices to ensure that we focus on the topics most important for understanding how opinion polls go from being boring numbers to biased news. Accordingly, and before moving to the next chapter, we should briefly mention what you will not find in this book. First, for reasons we introduce later, our focus is primarily on vote intention polls (see also Larsen & Fazekas, 2020). We do not argue that such polls can fully capture all nuances of public opinion, and our conclusions about public opinion in general are therefore necessarily modest. Thus, we do not go into a discussion of the nature of “public opinion” as a concept, including the question of whether it makes sense to talk of a public opinion in the first place. We simply assume that public opinion is measurable using statistical techniques and principles. We encourage readers interested in broader debates about public opinion as a concept to consult Bishop (2005), Price (2008) and Splichal (2012).

10

E. G. LARSEN AND Z. FAZEKAS

Second, this is not a statistics book. While we introduce several concepts throughout the book that are required to understand the basics of why and how opinion polls work, we do not devote attention to topics such as sampling theory or advanced survey design; readers with an interest in such statistical topics related to opinion polling should consult Groves et al. (2009). While we interpret all statistical analyses throughout the book for a non-statistical audience, prior knowledge of the basic principles of how opinion polling works, including the aforementioned sampling theory, will make it easier to follow various points and discussions. Third and finally, we do not discuss the actual accuracy of opinion polls, including questions such as whether they are more or less accurate today than in the past, or whether they face certain challenges when used to predict election outcomes. Previous research finds that opinion polls tend to predict election outcomes well as we get closer to election day (Jennings & Wlezien, 2018), and this pattern is strikingly similar across countries (Jennings & Wlezien, 2016). For more on using opinion polls to predict elections, see Kennedy et al. (2017). For an excellent review of the accuracy of polls, see Prosser and Mellon (2018). In brief, there is no systematic evidence showing that opinion polls today perform worse than in the past, and we are more interested in how good opinion polls are covered than in understanding when opinion polls are good. While we do not provide a detailed analysis of the accuracy of polls, we return to some of these issues in Chapter 6, where we consider the alternatives to poll reporting by the media.

1.4 Concluding Remarks

This book offers a novel argument for why opinion polls, as presented to the public, are not representative of what most opinion polls show. We not only provide a theoretical framework to better understand how and why the opinion polls that are available to the public are more likely to focus on change, despite most polls showing little to no change, but also demonstrate these dynamics empirically using several data sources and measurements from two different democracies, covering several years of political reporting. In the next chapter, we turn to the theoretical framework, describe the mechanisms accounting for biases in the reporting of opinion polls, and outline the expectations that will be examined empirically in Chapters 3 through 5.


References

Andersen, R. (2000). Reporting public opinion polls: The media and the 1997 Canadian election. International Journal of Public Opinion Research, 12(3), 285–298.
Ansolabehere, S., & Iyengar, S. (1994). Of horseshoes and horse races: Experimental studies of the impact of poll results on electoral behavior. Political Communication, 11(4), 413–430.
Bhatti, Y., & Pedersen, R. T. (2016). News reporting of opinion polls: Journalism and statistical noise. International Journal of Public Opinion Research, 28(1), 129–141.
Bishop, G. F. (2005). The illusion of public opinion: Fact and artifact in American public opinion polls. Rowman & Littlefield Publishers.
Dimitrova, D. V., & Strömbäck, J. (2011). Election news in Sweden and the United States: A comparative study of sources and media frames. Journalism, 13(6), 1–16.
ESOMAR, & WAPOR. (2018). Freedom to conduct opinion polls: A 2017 worldwide update. https://wapor.org/publications/freedom-to-publish-opinion-polls/
Groeling, T. (2008). Who’s the fairest of them all? An empirical test for partisan bias on ABC, CBS, NBC, and Fox News. Presidential Studies Quarterly, 38(4), 631–657.
Groeling, T., & Kernell, S. (1998). Is network news coverage of the president biased? Journal of Politics, 60(4), 1063–1087.
Groves, R. M., Fowler, F. J., Jr., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology. Wiley.
Hallin, D. C., & Mancini, P. (2004). Comparing media systems: Three models of media and politics. Cambridge University Press.
Jennings, W., & Wlezien, C. (2016). The timeline of elections: A comparative perspective. American Journal of Political Science, 60(1), 219–233.
Jennings, W., & Wlezien, C. (2018). Election polling errors across time and space. Nature Human Behaviour, 2(4), 276–283.
Kennedy, R., Wojcik, S., & Lazer, D. (2017). Improving election prediction internationally. Science, 355(6324), 515–520.
Larsen, E. G., & Fazekas, Z. (2020). Transforming stability into change: How the media select and report opinion polls. The International Journal of Press/Politics, 25(1), 115–134.
Larson, S. G. (2003). Misunderstanding margin of error: Network news coverage of polls during the 2000 general election. The International Journal of Press/Politics, 8(1), 66–80.
Matthews, J. S., Pickup, M., & Cutler, F. (2012). The mediated horserace: Campaign polls and poll reporting. Canadian Journal of Political Science, 45(2), 261–287.
Paletz, D. L., Short, J. Y., Baker, H., Cookman Campbell, B., Cooper, R. J., & Oeslander, R. M. (1980). Polls in the media: Content, credibility, and consequences. Public Opinion Quarterly, 44(4), 495–513.
Pedersen, R. T. (2012). The game frame and political efficacy: Beyond the spiral of cynicism. European Journal of Communication, 27(3), 225–240.
Pedersen, R. T. (2014). News media framing of negative campaigning. Mass Communication and Society, 17(6), 898–919.
Petersen, T. (2012). Regulation of opinion polls: A comparative perspective. In Opinion polls and the media. Palgrave Macmillan.
Price, V. (2008). The public and public opinion in political theories. In The SAGE handbook of public opinion research. Sage.
Prosser, C., & Mellon, J. (2018). The twilight of the polls? A review of trends in polling accuracy and the causes of polling misses. Government and Opposition, 53(4), 757–790.
Rothschild, D., & Malhotra, N. (2014). Are public opinion polls self-fulfilling prophecies? Research & Politics, 1(2), 1–10.
Scammell, M., & Semetko, H. A. (2008). Election news coverage in the U.K. In J. Strömbäck & L. L. Kaid (Eds.), The handbook of election news coverage around the world. Routledge.
Searles, K., Humphries Ginn, M., & Nickens, J. (2016). For whom the poll airs: Comparing poll results to television poll coverage. Public Opinion Quarterly, 80(4), 943–963.
Searles, K., Smith, G., & Sui, M. (2018). Partisan media, electoral predictions, and wishful thinking. Public Opinion Quarterly, 82(1), 302–324.
Splichal, S. (2012). Public opinion and opinion polling: Contradictions and controversies. In Opinion polls and the media. Palgrave Macmillan.
Tryggvason, P. O., & Strömbäck, J. (2018). Fact or fiction? Investigating the quality of opinion poll coverage and its antecedents. Journalism Studies, 19(14), 2148–2167.
van der Meer, T. W. G., Hakhverdian, A., & Aaldering, L. (2016). Off the fence, onto the bandwagon? A large-scale survey experiment on effect of real-life poll outcomes on subsequent vote intentions. International Journal of Public Opinion Research, 28(1), 46–72.
Weaver, D., & Kim, S. T. (2002). Quality in public opinion poll reports: Issue salience, knowledge, and conformity to AAPOR/WAPOR standards. International Journal of Public Opinion Research, 14(2), 202–212.
Westwood, S. J., Messing, S., & Lelkes, Y. (2020). Projecting confidence: How the probabilistic horse race confuses and demobilizes the public. Journal of Politics, 82(4), 1530–1544.

CHAPTER 2

The Four Steps of Poll Coverage: Creating, Selecting, Reporting and Responding

In this chapter, we outline the journey of opinion polls, from the creation of a poll to the point where the broader audience consumes the polling information packaged in the news media coverage. While most studies of opinion polls look at them at a single point in time, we pay attention to four conceptually distinct but interrelated steps in the process of poll coverage: creation, selection, reporting and responding. We do this in order to underscore how a small change in the first step can grow in the following steps and lead to large discrepancies between what all opinion polls show and what the news coverage of opinion polls suggests. The chapter proceeds as follows. First, we introduce an illustrative example of the core phenomenon we seek to understand in a systematic and comprehensive manner in this book. Second, we outline the four steps—or activities—in turn. Third, we connect and compare each of the four steps and posit the theoretical argument. Fourth, we conclude with the main implication of the theoretical insights before turning to the empirical findings in the next chapters.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 E. G. Larsen and Z. Fazekas, Reporting Public Opinion, https://doi.org/10.1007/978-3-030-75350-4_2


2.1 Who Cares About Which Polls?

To illustrate the various steps opinion polls have to travel through, consider two opinion polls conducted by two different polling companies on the same day. Will these two opinion polls get the same attention from journalists and be presented in the same way, assuming both interviewed a nationally representative sample of 1,000 people? The key argument presented in this chapter is that these two polls will not necessarily receive the same attention, despite the fact that they are both high-quality opinion polls following all state-of-the-art methodological principles. The reason is that opinion polls in most cases show slightly diverging numbers due to random noise, which is why the margin of error is relevant. One poll might put a party’s support at 29%, while another poll might put the same party at 32%; yet these two opinion polls are not statistically different from each other once we take the margin of error into account. If the party’s level of support in the most recent previous poll was 28%, then we expect the opinion poll showing 32% to be of greater interest to journalists, as a change of four percentage points (from 28 to 32%) is more interesting than a change of a single percentage point (from 28 to 29%). Not only is the poll showing this change more interesting, despite the change not being statistically significant (at a 95% confidence level), but it is also a change that journalists will put more emphasis on in how they frame the news article, and, at the end of the day, it will be information that attracts a stronger response from the public. To move to a real-world example, we can look at a political opinion poll carried out in Denmark by the polling company Megafon (in partnership with Politiken and TV 2, two large media outlets) on January 28, 2016.
Most notably, this poll indicated that the Social Democrats, the largest party in the Danish parliament, had registered a loss of seven percentage points, from 26.3 to 19.3% (a quarter of its total support). The previous polls by the same company (and others) had put the Social Democrats at around 25%. In the following days, this poll was included in several news stories across multiple media outlets. The story even made it into international media coverage, such as the Norwegian news. To be sure, the covering articles were not aimed at discussing the methodological facets or broader comparisons of the said poll. On the contrary, plenty of reactions from members of the party and of competing parties were published as responses to the poll. However, when contextualised and compared to the polls conducted in the months before and after, we can see that this poll was an outlier. In other words, the opinion poll was a statistical fluke that simply did not reflect the true level of support for the party. In comparison, several polls in the same period indicated smaller changes that were in line with general trends. Importantly, all of these opinion polls not showing a major change registered only a fraction of the covering articles or reactions from politicians, pundits, or the general public. In the following months, the subsequent opinion polls from Megafon moved back closer to 25%. However, this return to stability around 25% did not attract similar attention from journalists. For the public, the prominent opinion poll was salient and gave strong reasons to believe that the party in question had indeed lost a quarter of its support in the electorate. The example is a literal outlier and is not representative of what most polls look like. That being said, the basic mechanism at play here, in particular how, and how much, the media report on opinion polls, is not conditional upon extreme changes. As we argue in this chapter, even small and insignificant changes can significantly change whether an opinion poll is found newsworthy by journalists. Such small and insignificant changes can become large via a snowball effect due to the various steps that opinion polls need to travel through before they become part of the public conversation. That is, once an opinion poll is found newsworthy because of the change it shows, the content of the reporting will focus on said change and the public will be responsive to such reporting.
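To make the margin-of-error point above concrete, here is a small Python sketch (our illustration, not an analysis from the book; the 29%/32% figures and n = 1,000 come from the hypothetical example above) of why a three-point gap between two such polls is not statistically significant under the usual normal approximation:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
poll_a, poll_b = 0.29, 0.32

moe_a = margin_of_error(poll_a, n)  # roughly +/- 2.8 points
moe_b = margin_of_error(poll_b, n)  # roughly +/- 2.9 points

# Two-sample check: is the 3-point gap larger than its own uncertainty?
se_diff = math.sqrt(poll_a * (1 - poll_a) / n + poll_b * (1 - poll_b) / n)
z_score = (poll_b - poll_a) / se_diff

print(f"Poll A: 29% ± {moe_a:.1%}, Poll B: 32% ± {moe_b:.1%}")
print(f"z = {z_score:.2f} (below 1.96, so not significant at the 95% level)")
```

The confidence intervals of the two polls overlap, and the difference between them is smaller than 1.96 standard errors, which is exactly why two polls three points apart can both be faithful snapshots of the same underlying opinion.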

2.2 From Questions to Opinions: Four Activities

It usually takes only a few days from when an opinion poll is conducted, i.e. when a representative sample of the population answers questions about their opinions in a telephone interview or an internet-based panel, to the point when the public is presented with the results in the news coverage. During those days, however, several decisions are taken by the suppliers of opinion polls (pollsters, journalists) and by their consumers (politicians and users of mass media). These decisions have paramount implications for how people perceive and understand public opinion. While in some cases many or all of these decisions happen at a very fast pace, we find it important to zoom in on each of the activities and understand their defining characteristics with regard to how opinion polls are covered. In doing this, we are able not only to position the existing studies within each of these activities but also to emphasise our unique contribution to how we understand the coverage of opinion polls in contemporary democracies. The activities involve different challenges and potential biases in how we understand the coverage of public opinion. We refer to the decisions as activities and identify four types, mapping them to a temporal dimension. This helps us understand and illustrate how different biases can accumulate or cancel out, and in particular how decisions at the various stages matter for the availability of opinion polls in the information environment of the public. Beyond the general framework, in this chapter we devote more attention to the first step, the creation of opinion polls, and then present our detailed discussion of the next three steps in subsequent chapters. However, we briefly outline those activities here, as they form the theoretical skeleton of the book (for more on the theoretical framework, see also Larsen and Fazekas [2020]).

2.2.1 Creating a Poll

The first activity is creating a poll: essentially, the decision to conduct a survey and collect the data. There are two important questions in the creation of a poll: first, whether an opinion poll should be conducted at all; second, how it should be conducted. For the question of whether a poll should be commissioned, there is often a partner organisation involved in decisions about when polls are conducted. Many news organisations and outlets maintain ongoing contracts with polling companies, spanning various data collection topics but often including political polls. While not all polls created by survey companies are exclusive to their partners, many of them are initially provided only to the partner organisations, conferring reporting advantages. That is, media outlets have ties to specific polling companies that matter for what opinion polls we see, and when. To illustrate some empirical insights, in our Danish case we will be looking at nine news outlets of different forms, five of which had ongoing partnerships with polling institutes. While these relationships are quite stable, there are also a few examples of news outlets switching polling companies. In the United Kingdom, we will zoom in on the Guardian and the polling companies conducting opinion polls for them.


Overall, however, we find the question of when a poll should be conducted important in theory but less so in practice. In contemporary democracies, polls are created on a regular basis, and the importance of the creation of any single poll diminishes as the number of polls being conducted increases. This is especially the case if we also consider political campaign periods leading up to elections. For that reason, we will not devote much attention to this specific question beyond the few relevant considerations discussed in this chapter. However, if opinion polls were rarely conducted and more likely to be conducted when certain events took place, this would lead to an event bias. Demonstrating an event bias is methodologically difficult, as it requires assumptions about what public opinion would look like at times where we have little or no data. We will return to considerations about the timing of polls at the selection stage, where we also account for the time since the last available poll in building a holistic picture of selection and, subsequently, coverage bias. As opinion polls are more likely to be conducted as election day gets closer, we generally have more reliable information on public opinion during election campaigns. To illustrate this, we first look at the three most recent parliamentary election years in Denmark: 2011, 2015 and 2019. Figure 2.1 shows the weekly number of vote intention polls within each of the three years. As we can see, there is a significant increase in the number of polls as we approach election day in Denmark, i.e. September 15th in 2011 (week 37), June 18th in 2015 (week 25) and June 5th in 2019 (week 23). Interestingly, the longer election campaign period in 2019 led to fewer weekly opinion polls. Furthermore, this trend is not related to any seasonality within a year, and it is independent of the calendar timing of the election.

Fig. 2.1 Number of opinion polls, Denmark, 2011, 2015 and 2019

Figure 2.2 shows a similar empirical trend in the United Kingdom, where we extend the display not only to election years (2001, 2005, 2010, 2015, 2017, and 2019) but to the entire last 20 years. As seen, there are minor spikes around other politically relevant events, but the election dates are easily visible. In addition, we also see a sharp drop after elections in how many polls are being conducted: in the aftermath of an election, we see very few opinion polls. A final remark on how context can influence the number of polls fielded in the United Kingdom concerns the 2010–2015 period: while the government had a sizeable majority (78 seats), it was a coalition government, and political polling was used frequently throughout the period to get repeated snapshots of support, specifically relevant for the coalition members. Importantly, these trends are not specific to the political or media systems of Denmark and the United Kingdom. On the contrary, this is similar to what we see in other countries: as we get closer to election day, we are more interested in who is likely to win and lose (Banducci & Hanretty, 2014).

Fig. 2.2 Number of opinion polls, United Kingdom, 2000–2019

However, while there are definitely more polls released at certain points, the activity of creating a poll is primarily interesting from the perspective of how a poll is created (and not when). In particular, this is of relevance when we regard a particular poll to be good or bad. Good opinion polls are characterised by providing a valid and reliable snapshot of what the public believes. Bad opinion polls, on the other hand, can be bad for a multitude of reasons. This is in line with the Anna Karenina principle, which, applied to opinion polling, states that any one of the many factors that go into creating an opinion poll can make it bad. In their report on polls and the 2016 presidential election in the United States, the American Association for Public Opinion Research (AAPOR) pointed out that there is variation in the quality of state-level polls, such as who conducts the polls and when (AAPOR, 2017). A good poll aims to mitigate two types of error. First, it aims to reduce random sampling error (by having a large, representative sample of the population). Second, and more importantly, it aims to reduce systematic sources of error. Various biases can be problematic here, including, but definitely not limited to, method bias and response bias. The most relevant point here is that even the best polls, indeed especially the best polls, will show random changes over time. As we will discuss in the next chapters, polls will show changes over time even when no such change is happening in the public, and such changes can have significant implications for how opinion polls are covered. Why do we need to care about how opinion polls are created? Because these choices can significantly shape the results we get. In 2016, prior to the presidential election in the United States, The New York Times gave five pollsters the same data, consisting of 867 poll responses from voters in Florida (see Cohn, 2016). The pollsters used the same data but reached different conclusions.
One pollster gave Hillary Clinton 42% of the votes and Donald Trump 38% (a win for Hillary Clinton by four percentage points), whereas another pollster gave Hillary Clinton 40% of the votes and Donald Trump 41% (a win for Donald Trump by one percentage point). In other words, polls are not objective numbers unrelated to the choices made by pollsters. For that reason, we pay additional attention throughout the book to some of these methodological details, including the margin of error.
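The claim that even the best polls fluctuate can be illustrated with a short simulation (a Python sketch of ours, not part of the book's analysis; the 30% support level and sample size of 1,000 are assumed for illustration): we hold a party's true support fixed and draw repeated samples, mimicking a series of perfectly conducted polls.

```python
import random

random.seed(1)

TRUE_SUPPORT = 0.30  # the party's "true" support: fixed, no real change
N = 1000             # respondents per poll

def run_poll():
    """One simulated poll: N independent respondents, each supporting
    the party with probability TRUE_SUPPORT."""
    supporters = sum(random.random() < TRUE_SUPPORT for _ in range(N))
    return supporters / N

polls = [run_poll() for _ in range(10)]
print([f"{p:.1%}" for p in polls])
print(f"Spread across 10 polls: {(max(polls) - min(polls)) * 100:.1f} points")
```

Despite zero true change, successive simulated polls will typically differ by a couple of percentage points, which is precisely the kind of movement that a change-oriented news narrative can seize upon.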


2.2.2 Selecting, Reporting, and Reacting

The second activity is selecting an opinion poll. Once a poll is created, it is not guaranteed that an editor or journalist will deem it relevant. The selection stage can first be understood as a simple binary decision about whether a poll should be selected for reporting or not. The selection of a poll is like a pregnancy insofar as a poll cannot be half selected. However, as we argue in the next chapter, many considerations go into this activity and there are multiple ways in which an opinion poll can be selected. For example, a key consideration can be the amount of space that should be devoted to a poll. Furthermore, while the initial selection of a poll for reporting is crucial, the possibility that a poll is selected for coverage by multiple media outlets makes the question of selection much more complex. We dedicate Chapter 3 to this stage. The third activity is reporting a poll. Once an opinion poll is found relevant for coverage, the media can report on it in various ways, including the framing of the poll (e.g., whether it shows change or no change) and the level of detail (e.g., the methodological information included in the reporting). These are the aspects that often receive attention in the literature on the quality of reporting (Andersen, 2000; Bhatti & Pedersen, 2016; Paletz et al., 1980; Weaver & Kim, 2002), such as the extent to which important pieces of methodological information are present in the coverage. This is the stage we focus on in Chapter 4. The fourth activity is responding to opinion polls. This is the final stage in our framework and relates to how key actors respond to polls. There are different groups of interest here, such as the public and politicians.
For example, members of the public can be motivated to change their vote choice based on the coverage of polls, or they can decide to share the poll in question within their social network, and politicians can react to and comment on polls. This is the stage in which the opinion poll makes its way back to the public. We focus on this stage in Chapter 5.

2.3 A Common Framework for All Steps

For each of these activities, we are looking at a different set of questions, actors and potential biases, summarised in Table 2.1.

Table 2.1 The four activities of opinion poll coverage

Activity 1: Creating
  Actor: polling firms, media outlets
  Key question: Should an opinion poll be commissioned?
  Potential bias: event bias, method bias

Activity 2: Selecting
  Actor: editors, journalists
  Key question: Should an opinion poll be covered?
  Potential bias: selection bias

Activity 3: Reporting
  Actor: journalists
  Key question: How should an opinion poll be covered?
  Potential bias: reporting bias

Activity 4: Responding
  Actor: audience (e.g. the public)
  Key question: How do people respond to opinion polls?
  Potential bias: motivational bias

When polls are created, this is often done in a collaboration between polling firms and media outlets. Here, the key question is whether an opinion poll should be commissioned or not. While this activity is far from the attention of the public, considerations about whether there will be an audience for the poll weigh heavily in that decision. Once a poll has been conducted, selecting and reporting the poll is a job for editors and journalists, with journalists having more of a say as the process moves towards the specifics of the reporting. Questions such as whether to select an opinion poll and how to report it will often be intertwined, as the potential to report an opinion poll in a certain way shapes the considerations on whether to report on it in the first place. There are two noteworthy complexities here. First, because polling firms are often involved in the design of the questions and of the survey itself, they also provide interpretations of what the polls show. Such statements can often travel all the way to the final poll reporting (Pétry & Bastien, 2013). This is analogous to working with a close to unaltered press release or report, sometimes provided only to the partner media outlet. Second, although it is rare, polling firms sometimes conduct opinion polls without partnering with a media outlet. Finally, when the poll is covered in the media, the remaining question is how people respond to it. When journalists cover polls, they might involve other actors by asking politicians and pundits for their reactions. While the key actor in the reporting is the journalist, we will focus on whether other actors in the reporting react to the poll. In our framework, the primary point is that we should not understand these activities in isolation from each other, but we should rather link different activities together. In doing so, we can demonstrate how
different activities are connected to each other in systematic ways, with implications for how biases shape the coverage at different stages and travel from one stage to the next. For example, considerations about how the audience will respond to a poll can matter not only for whether a poll is deemed newsworthy but also for how exactly such a poll is going to be covered. In other words, the answers to the questions illustrated in Table 2.1 are conditional upon each other, even if the actors involved are different or there is a temporal distance between them. We started this chapter with two example polls. We can now extend this and systematically illustrate how some of these biases can have cumulative effects. To do so, we zoom in on one element that characterises the relationship between two subsequent polls: the amount of change registered by one or more parties measured in these polls. As we show in the next chapter, change or volatility can mean many different things, but for present purposes we use the term in a very general way, as some difference between two points in time in the support a party receives. Figure 2.3 shows how 50 opinion polls can be affected by the four activities. First, at the creation stage, 70% of the opinion polls show no changes, 20% show medium changes and 10% show big changes. In practice, these numbers will differ according to the frequency with which opinion polls are conducted. During election campaigns, for example, most opinion polls will show no changes from one day to the next. Second, at the selection stage, i.e. where journalists and editors decide whether they want to cover an opinion poll, a selection bias leads to a disproportionate number of opinion polls showing changes, even if most opinion polls show no changes. In the illustration, we now have 40% of the opinion polls showing no changes (as most of the polls showing no changes in the first place were not selected), 40% showing medium changes and 20% showing big changes. Notably, these numbers illustrate the selection dynamics, in particular the preference for change, and should not be seen as numeric predictions. Third, in the reporting, journalists can turn marginal changes into large changes, for example by ignoring the margin of error or comparing two different outlier polls. Now, only 20% of the polls show no change. Fourth, the public will find opinion polls showing greater changes more interesting, will read and share such opinion polls accordingly, and will therefore be exposed to opinion polls showing big changes. In the final scenario, we therefore see a majority of the opinion polls showing big changes—even when a majority of the opinion polls shows little to no change.

Fig. 2.3 Theoretical illustration of cumulative effects

Overall, as the illustration shows, and as the subsequent chapters demonstrate empirically, there are distinct ways in which opinion polls can go from being boring snapshots showing little to no change to being present in the news diet of the public showing great and surprising changes. This has substantial implications for our understanding of opinion polls in contemporary democracies. As Traugott (2008, p. 239) argues: “Consumers of poll results rely upon journalists to select and report poll results and public opinion data in a way that provides accurate information and a context for interpretation”. If the context for interpretation is shaped by certain biases, this will alter how exactly the public and politicians alike respond to opinion polls and perceive the nature of public opinion itself. Hence, by understanding how opinion polls are reported in the news media, we take an important step forward in understanding the processes polls travel through from their initial collection to their final coverage available to the public. Specifically, we focus on different concepts that can have distinct implications for how exactly opinion polls are turned into news stories.
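The selection step of this cumulative illustration can be reproduced with a back-of-the-envelope calculation. The sketch below is ours, in Python; the selection probabilities are hypothetical values chosen only so that the 70/20/10 creation-stage shares end up at the 40/40/20 shares used in the illustration.

```python
# Shares of polls at the creation stage (from the illustration: 70/20/10).
created = {"no change": 0.70, "medium change": 0.20, "big change": 0.10}

# Hypothetical probabilities that a journalist selects each kind of poll;
# the preference for change drives the result, not the exact values.
selection_prob = {"no change": 0.20, "medium change": 0.70, "big change": 0.70}

# Apply the selection filter, then renormalise to shares of *covered* polls.
covered = {k: created[k] * selection_prob[k] for k in created}
total = sum(covered.values())
covered = {k: v / total for k, v in covered.items()}

for category, share in covered.items():
    print(f"{category}: {share:.0%} of covered polls")
# → no change: 40%, medium change: 40%, big change: 20%
```

Repeating the same kind of filter for the reporting and responding stages compounds the shift further, which is exactly the cumulative effect the chapter describes.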
While polls are often considered newsworthy by nature, we set out to explore systematic patterns in how journalists turn these polls into a political horse race. Our framework is similar to the functional model of news factors in news selection by Staab (1990), which draws a temporal distinction between “reality”, “journalists’ decisions” and “media coverage”. The idea of election coverage as a horse race is not new and can be traced back to at least 1888, when the Boston Journal wrote that a “dark horse” was unlikely to emerge from the campaign (Broh, 1980, p. 526). Brettschneider (2008) describes horse race coverage as follows: “This term is an application of a metaphor derived from sports. ‘Who’s ahead?’ ‘Who’s running behind?’ ‘Who made gains? Who suffered losses?’” (p. 483). This is not necessarily good or bad in and of itself. As Brettschneider (2008) further describes: “In the supporters’ opinions, the usage of the ‘horse-race metaphor’ helps to build public interest for a topic which, otherwise, seems to be incomprehensible, distant, and boring” (p. 483). For critics, though, it shifts the focus away from the substance of the political issues that campaigns should be about. The public has a strong preference for horse race journalism, creating demand-side expectations for editors and journalists: on average, people prefer election news about the horse race and strategy (Iyengar et al., 2004; Matthews et al., 2012). In fact, those with a greater interest in politics prefer news with a strategic frame, even when they say they do not (Trussler & Soroka, 2014). In other words, even when politically interested citizens say that they prefer actual political substance over horse race coverage, their actual news consumption will not necessarily reflect those stated attitudes. Accordingly, while each activity involves, in theory, a different bias, we expect all of these biases to spring from the same preference for change. The preference journalists show for change at the selection stage (a selection bias) is the same preference the public shows for change when deciding whether to engage with an opinion poll (a motivational bias). In other words, our framework highlights supply and demand dynamics, where journalists supply the changes in opinion polls that are in high demand among the public.

2.4

Concluding Remarks

When it comes to the focus on the horse race when covering opinion polls, such journalistic preferences can lead to misinterpretations of the opinion polls (Patterson, 2005). Specifically, journalists might focus on the exact numbers and conclude that any difference is significant, rather than taking the survey error into account. As we show in the next chapters, this is indeed the case: even small changes that are statistically insignificant feed into the reporting of opinion polls. How can polls show very little change from one poll to the next while the coverage of polls is dominated by stories about change? This is one of the questions we pay close attention to throughout the book. The findings we present in the upcoming chapters show that biases in whether or not polls are covered are not mitigated in the actual reporting. On the contrary, by linking the different stages together in a unified framework, we can track the opinion polls and show how biases are amplified in the coverage.

References

AAPOR. (2017). An evaluation of 2016 election polls in the U.S. https://www.aapor.org/Education-Resources/Reports/An-Evaluation-of-2016-Election-Polls-in-the-U-S.aspx. Accessed March 13, 2021.
Andersen, R. (2000). Reporting public opinion polls: The media and the 1997 Canadian election. International Journal of Public Opinion Research, 12(3), 285–298.
Banducci, S., & Hanretty, C. (2014). Comparative determinants of horse-race coverage. European Political Science Review, 6(4), 621–640.
Bhatti, Y., & Pedersen, R. T. (2016). News reporting of opinion polls: Journalism and statistical noise. International Journal of Public Opinion Research, 28(1), 129–141.
Brettschneider, F. (2008). The news media’s use of opinion polls. In W. Donsbach & M. W. Traugott (Eds.), The SAGE handbook of public opinion research. Sage.
Broh, C. A. (1980). Horse-race journalism: Reporting the polls in the 1976 presidential election. Public Opinion Quarterly, 44(4), 514–529.
Cohn, N. (2016). We gave four good pollsters the same raw data. They had four different results. The New York Times. https://www.nytimes.com/interactive/2016/09/20/upshot/the-error-the-polling-world-rarely-talks-about.html. Accessed January 31, 2021.
Iyengar, S., Norpoth, H., & Hahn, K. S. (2004). Consumer demand for election news: The horserace sells. Journal of Politics, 66(1), 157–175.
Larsen, E. G., & Fazekas, Z. (2020). Transforming stability into change: How the media select and report opinion polls. The International Journal of Press/Politics, 25(1), 115–134.
Matthews, J. S., Pickup, M., & Cutler, F. (2012). The mediated horserace: Campaign polls and poll reporting. Canadian Journal of Political Science, 45(2), 261–287.
Paletz, D. L., Short, J. Y., Baker, H., Cookman Campbell, B., Cooper, R. J., & Oeslander, R. M. (1980). Polls in the media: Content, credibility, and consequences. Public Opinion Quarterly, 44(4), 495–513.
Patterson, T. E. (2005). Of polls, mountains: US journalists and their use of election surveys. Public Opinion Quarterly, 69(5), 716–724.
Pétry, F., & Bastien, F. (2013). Follow the pollsters: Inaccuracies in media coverage of the horse-race during the 2008 Canadian election. Canadian Journal of Political Science, 46(1), 1–26.
Staab, J. F. (1990). The role of news factors in news selection: A theoretical reconsideration. European Journal of Communication, 5(4), 423–443.
Traugott, M. W. (2008). The uses and misuses of polls. In The SAGE handbook of public opinion research. Sage.
Trussler, M., & Soroka, S. N. (2014). Consumer demand for cynical and negative news frames. The International Journal of Press/Politics, 19(3), 360–379.
Weaver, D., & Kim, S. T. (2002). Quality in public opinion poll reports: Issue salience, knowledge, and conformity to AAPOR/WAPOR standards. International Journal of Public Opinion Research, 14(2), 202–212.

CHAPTER 3

Explaining How Media Outlets Select Opinion Polls: The Role of Change

In this chapter, we examine the selection of opinion polls by the media. Specifically, we ask why some opinion polls are selected by journalists whereas others are not. In simple terms, whenever we speak about selection, we refer to the act whereby a conducted opinion poll is picked up by a news outlet for at least one newspaper article dealing mainly with said poll. This is an important first step if we are to shed light on why we see discrepancies between what all opinion polls show and what the opinion polls in the media show. As introduced in Table 2.1 in Chapter 2, the selection stage contains activities where the deciding actors are editors and/or journalists, and we are interested in understanding this activity because it can show the extent to which there is a selection bias in the news coverage. The two main questions we answer here revolve around (1) why some but not all opinion polls are given attention, and (2) how much attention is given to polls. While these two questions are interrelated, we show relevant theoretical and empirical nuances between the two. Later on, in Chapter 5, we show that just because two opinion polls get published, it does not mean that they will attract the same level of attention, especially not from the public. We argue that one core feature of opinion polls determines whether they are selected or not, namely how much change they show (Larsen & Fazekas, 2020).

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. E. G. Larsen and Z. Fazekas, Reporting Public Opinion, https://doi.org/10.1007/978-3-030-75350-4_3

Specifically, the more change an opinion poll shows, the greater the likelihood that it will be selected. Thus, in this chapter, we demonstrate empirically that the selection bias happens along the lines of change and that this activity is a defining moment, driven by considerations of what narratives can be told. As we show, even when statistical information is available showing that such change is not a “real” change, i.e. that it falls inside the margin of error, this does not hinder the development of a selection bias, and the bias travels across multiple news outlets, even beyond the media outlet commissioning the opinion poll. Overall, this suggests that selection biases driven by the attractive nature of a change narrative are systematic and overarching.

The chapter is structured as follows. First, we discuss how to understand opinion polls as news, and in particular what can make some events more likely to be selected by the mass media. Second, we introduce the context of opinion polls in Denmark and the United Kingdom, which serves as the empirical context for the analyses. Third, we focus on different notions of change and in particular our operationalisation of change in opinion polls. Fourth, we outline the media data we link to the opinion polls. Fifth, we empirically test whether changes in opinion polls are significant drivers of their newsworthiness, namely how change matters for the selection of polls. Sixth, we zoom in on the nature of the change in opinion polls and analyse whether the changes are outside the margin of error and, if so, whether this is relevant for how media outlets select opinion polls for the news coverage. Seventh, we provide an overview of the findings in the chapter.

3.1

Opinion Polls as News

Not all opinion polls will become news, and to understand why, we need to reflect upon what we mean by news. News is not simply news: our simple yet crucial starting point concerns what news is, rather than what a poll is. Whether something is newsworthy is conditional upon the frequency with which news is published (Galtung & Ruge, 1965). For example, as Tim Harford describes in his book How to Make the World Add Up, there is a big difference between what counts as financial news in the rolling coverage of Bloomberg TV, the daily newspaper Financial Times and the weekly newspaper The Economist because of their publication frequencies. Similarly, whether an opinion poll is newsworthy will be decided in interaction with the information environment and the publication deadlines of the newspapers. In a hypothetical scenario where news was only published once a year, a new opinion poll would most likely be relevant on its own terms. However, with daily newspapers and shorter deadlines, a new poll is not necessarily interesting compared to all the other polls that have been reported recently. This feature makes it crucial to understand how opinion polls can differ in their newsworthiness according to how we define news. There are various definitions of what news is. Such definitions normally depict news as recent, interesting and significant events (Kershner, 2005). In general, political journalists have to select the most novel political events and decide how to report them, often with the imperative to meet a demand for horse race political coverage (Iyengar et al., 2004; Matthews et al., 2012). When considering opinion polls as news, the news is not simply about providing the numbers from a new poll, but about making sure that some numbers add to a narrative that is recent, interesting and significant. The challenge is that most opinion polls, when compared to other recent polls, are rarely interesting and rarely indicate significant changes that would generate substantive interest. In fact, as we show below, most opinion polls show little to no change outside the margin of error over short periods of time. This, then, is a core challenge for a journalist faced with a new opinion poll: once all methodological details and considerations are weighed correctly, there is most likely no glaringly obvious story to tell. How, then, can journalists select opinion polls to meet the demand for horse race political coverage when such polls rarely provide evidence for novel changes? Providing news about “no news” will rarely do the trick. In contrast, we can investigate when polls become news and, in the next chapter, how this can transform the nature of the poll itself.
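To make the margin-of-error point concrete, here is a minimal sketch (our illustration, not the authors’ code) of how one can check whether a poll-to-poll change is distinguishable from sampling error, assuming simple random sampling and a 95% confidence level:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def change_is_significant(p1, n1, p2, n2, z=1.96):
    """Two-sample check: does the change between two polls exceed sampling error?"""
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se_diff

# A party at 25% in a poll of 1,000 respondents:
moe = margin_of_error(0.25, 1000)  # about 0.027, i.e. roughly 2.7 percentage points

# A 2-point "shift" between two such polls is well within sampling error:
print(change_is_significant(0.25, 1000, 0.27, 1000))  # False
```

With the sample sizes typical of the polls discussed here (around 1,000 respondents), a party share near 25% carries a margin of error of roughly 2.7 percentage points, so most week-to-week movements are indistinguishable from noise.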
An opinion poll will often, even when it shows no change, be transformed into an opinion poll showing changes. The change is the news, and the poll serves as the evidence behind it, rather than the poll being the news itself. Within the framework introduced in the previous chapter, after a poll has been created, it can either go unnoticed or become part of the news coverage as a newsworthy piece of information. What we show is that when polls become news, the newsworthy piece of information is often not in line with what the specific poll shows, and in general not with what several polls show.

The selection activity, that is, whether a poll goes unnoticed or not, is guided by general media selection principles. Thus, there is nothing specific about the mechanism, but our focus on polls makes it possible to assess its extent in great detail, given all the information polls provide, and to link the resulting measures to objective, statistical properties. In other words, while the focus is on opinion polls, we expect that the dynamics are relevant for other types of political coverage, and we can rely on general theories of media coverage, in particular theories about gatekeeping and newsworthiness, to provide direct expectations for how and when opinion polls are selected by editors and/or journalists for coverage. The mass media work as a gatekeeper in relation to which political events are reported (Clayman & Reisner, 1998; Groeling & Kernell, 1998; Helfer & Aelst, 2016; Padget et al., 2019; Soroka, 2012; Wagner & Gruszczynski, 2018), and not all political events obtain the same level of attention in the media (Greene & Lühiste, 2018; Kostadinova, 2017; Meyer et al., 2020). It is unfair and impractical to expect the media to report all political events, national and international, small and large. For that reason, it is no surprise that not all opinion polls see the light of day in the black ink of the newspaper. Most studies about opinion polls focus on the media reporting, and are therefore overwhelmingly interested in the effects of the reporting or in how good the reporting is when compared to a set of standards. However, key to the study of the coverage of opinion polls is defining the population of opinion polls available to be selected. There are specific limits to what we can say about opinion polls in the media by studying only the opinion polls that were selected: without a population of opinion polls, we are not able to make inferences about the coverage of opinion polls vis-à-vis the absence of coverage (Groeling, 2013; Hug, 2003). This is a core methodological challenge for any study of gatekeeping in the media.
In this chapter, we introduce such populations of both unreported and reported opinion polls, which enables us to track which opinion polls end up in the political coverage. In the day-to-day business of selecting opinion polls to cover, journalists and editors have to balance various considerations about which events to cover and how. Previous studies have examined how journalists perceive the importance of different events (Strömbäck et al., 2012), and among the properties most relevant to journalists are those revolving around deviations from similar events. That is, the more an event stands out, the more interesting it is. The more an event today deviates from a similar event yesterday, the better.


For that reason, the greater the potential for writing about changes related to an event, the more likely it is to be considered newsworthy (Lamberson & Soroka, 2018). As Soroka et al. (2015, p. 460) write, “[n]ovelty and change are defining features of newsworthiness”. The emphasis on change is not limited to the coverage of polls but also matters for media coverage of other issues, such as the economy (Soroka et al., 2015). This is not to say that journalists cannot have other preferences. On the contrary, journalists pay attention to a multitude of event characteristics when considering potential newsworthiness. However, all else equal, change or deviation is an easily accessible characteristic that makes any event more novel and remarkable. Although previous studies have demonstrated how certain characteristics make political events more likely to be selected (Andrews & Caren, 2010; McCarthy et al., 1996; Meyer et al., 2020; Myers & Caniglia, 2004; Niven, 2001; Oliver & Maney, 2000), we do not fully understand how the focus on change can guide the different stages of the coverage of opinion polls. Banducci and Hanretty (2014) study several determinants of opinion poll coverage and find that opinion polls are more likely to be covered closer to election day and in more polarised party systems. This is a fruitful starting point, and we extend these observations by considering not only whether a poll is selected, but also how often this happens and by whom. A few studies have paid attention to the circumstances under which polls are more likely to be selected by the news media (Groeling, 2008; Groeling & Kernell, 1998; Matthews et al., 2012; Searles et al., 2016). However, they share one or more limitations that we seek to address. First, and most importantly, they each focus on only one aspect of the framework we introduced in Chapter 2. Specifically, we cannot use these studies to learn how a selection bias also shapes the actual reporting of opinion polls, or whether such opinion polls are more likely to be shared. Second, some studies have little temporal variation, with most of the attention devoted to a limited time period. In one of our datasets, we leverage polls spanning decades to ensure that the data are not limited to one campaign context. Third, as with most of the empirical literature on opinion polls, most of what we know about the selection of opinion polls is limited to the United States.


3.2

Studying Opinion Polls in Denmark and the United Kingdom

We move outside the context of the United States and focus on Denmark and the United Kingdom (the UK from now on), covering different periods, polls, and outlet–polling company combinations. We work with these two European democracies in a complementary manner: both cases come with advantages and disadvantages, and we attempt to offset these and present a consistent and comprehensive picture of poll reporting. The extent of difference in coverage and content helps us generalise our findings, confers additional confidence in our main conclusions, and also extends our findings through data that is only available in one of the countries. We use the full population of opinion polls in Denmark (N = 487) conducted by eight polling firms on vote intention for eight political parties from 2011 to 2015 (the median poll sample size is 1,047). The eight polling firms are Epinion, Gallup, Greens, Megafon, Rambøll, Voxmeter, Wilke and YouGov. The polling firm Norstat carried out five similar polls in this period; to ensure that we use comparable polls, i.e. polls conducted on a regular basis, we do not include these five. The period covered begins after the 2011 national election and stops prior to the 2015 national election campaign. There is no archive containing all opinion polls and, accordingly, we collected all opinion polls in this period ourselves. To ensure that all polls were collected, especially polls not reported in the media, the dataset was developed in collaboration with media outlets and polling firms. In the aftermath of the Danish general election in 2011, the Social Democrats, the Social Liberal Party and the Socialist People’s Party formed a three-party coalition government with the support of the Red–Green Alliance.
In this period, there was a new entrant in Danish politics, The Alternative, but given that, as a new party, it has far fewer measurements, we do not include it in the main analysis. In Fig. 3.1 we summarise the evolution of support, based on all the polls, for the parties we track. The figure is based on polling numbers for the vote intention questions, with dots representing individual polls. The line should be interpreted as a weighted average for each party, based on the model following Jackman (2005), where the support numbers are more precise than numbers from a single poll and less biased, as polling house effects are accounted for. Using information on each poll’s sample size, the model also lets us estimate the uncertainty around these averages, depicted by the shaded areas. When no information on the sample size of an opinion poll was provided, we impute a sample size of 1,000 respondents. Unsurprisingly, when more polls lie close to each other, we also have narrower uncertainty bounds.

Fig. 3.1 Party support evolution based on all polls, Denmark, 2011–2015

In the United Kingdom, we analyse a much longer period, from 2000 to 2019, but with an in-depth look at one particular news organisation, the Guardian. More specifically, we are interested in how a news outlet selects and later reports polls carried out for it. However, before zooming in, we describe the general context relying on all polls in this period. This data was generously provided by Professor Will Jennings. The opportunity to cover a longer period of time for one particular news organisation allows for more contextual heterogeneity, such as changes in government composition, changes in government majority, and the emergence, consolidation, and fall of a new party, the United Kingdom Independence Party (UKIP). However, it also comes with a lot of polling-related heterogeneity in terms of quality and availability. This is less of an issue for the first step of summarising party support changes, which we carry out in Fig. 3.2 based on 3,733 opinion polls (median poll sample size of 1,745) from 29 companies. We focus on five parties to assure reasonable coverage and frequency of polling: Labour, Conservatives, Liberal Democrats, Greens, and UKIP.

Fig. 3.2 Party support evolution based on all polls, United Kingdom, 2000–2019

As with Fig. 3.1 for Denmark, dots represent individual polls whereas the lines (with uncertainty) are based on the pooling-the-polls model introduced by Jackman (2005). The takeaway message from these plots is that while there are fluctuations when zooming in on shorter periods, or day-to-day affairs, we see considerable stability. This confirms that a single new poll will rarely stand out from what other polls conducted within the same weeks show. In both cases, we see mostly long-term changes, or more precisely changes that need quite some time before a consolidated party support level is reached. There is much more variation in the United Kingdom, given the length of the period covered, but the figure itself compresses a lot of information. The largest readjustments happen post-election, and yet again, day-to-day or even month-to-month changes are small, supporting the idea of stability within a period that overlaps with what we will call the news reporting cycle. Furthermore, the uncertainty of the comparisons we can make should be considered in at least two ways, both of which further emphasise the relative stability: within-party comparisons between measurements and between-party comparisons. First, day-to-day or even month-to-month changes in a party’s support will come with overlapping confidence intervals; even more, the confidence intervals of the more recent


poll estimates, or polling averages, will bracket the previous poll’s estimate. That is, when we look at one party over short periods of time, the support for the party is stable once we take the margin of error into account. Second, if we look at the Danish case, with the Danish People’s Party (Dansk Folkeparti) and the Social Democrats (Socialdemokraterne), the point at which they overtake each other in this period is not necessarily just one day or one poll: we see them first being close, with overlapping uncertainty estimates, and then steadily changing places. In sum, there are interesting trends to observe in the development of support for the various parties of interest. However, such trends rarely unfold over weeks and are best conceptualised over longer periods of time. Accordingly, the trends are much better illustrated in historical overviews than in the day-to-day journalism of the most recent developments.

For our analysis throughout the book, we use all the data introduced for Denmark. For the United Kingdom, we focus on one outlet and all the polls commissioned by this outlet or generated as a result of a partnership. This ensures comparability and a more granular look at our mechanisms within one newspaper. For the description of general party support trends, we rely on all the information, since that information is available to everyone working in politics or political news media; this has already been displayed in Fig. 3.2. However, for the detailed analysis of what is selected and covered by the Guardian, we look at polls that were commissioned by this newspaper. In doing so, we also include articles from The Observer and the online version of the Guardian, both of which we will refer to simply as the Guardian. This results in polls from three different polling companies: ICM, MORI and Opinium. Overall, the analyses in the two countries supplement each other well. In Denmark, we have variation in the media outlets over a few years.
In the United Kingdom, we have variation over a longer time span within one outlet. As we showed in the previous chapter, there is a stronger appetite for polls in election years and campaign weeks, which generates a plethora of polls, increasing the potential journalistic supply of political information. Since we want to establish the general change-driven reporting mechanism in a context where not everything is about elections, our main interest lies in non-campaign periods. Furthermore, as with the number of polls, we also see essentially constant, high levels of poll reporting when elections are approaching. Although the availability of polls is larger and the demand might be stronger, there is still competition for limited news space; in general, however, political and electoral news are more attractive, expanding the space dedicated to such topics. In other words, the space normally allocated to political content increases in the days leading up to election day, and to ensure that our conclusions are not driven by such changes in the coverage, with its abundance of opinion polls, we primarily look at the selection of opinion polls outside the context of campaigns. Consequently, we exclude the last three weeks before each national election (not applicable in Denmark, as no national campaign period is included in the data) and also the first poll after each election, since such polls are often readjusted snapshots of the election rather than survey measures meaningfully comparable with anything prior to the election, i.e. the last pre-election poll. Overall, our final poll sample for the United Kingdom is composed of 392 polls.

3.3

Operationalisations of Change in Opinion Polls

We rely on several notions of change and formulate theoretical expectations for how each can matter for whether an opinion poll is selected. Thus, we formalise some of our broader discussion about stability and change, followed by a categorisation of the change operationalisations and their potential implications. There are several ways in which journalists can focus on change when deciding whether to dedicate a news story to a new poll. We concentrate on specific choice determinants that are themselves related to other polling information. These notions of change increase in complexity along two dimensions: first, the number of polls being used (poll complexity); second, the number of parties being covered (party complexity). This categorisation means that we always analyse at least two opinion polls, as there is no change within a single poll, and that we base our comparisons on one or multiple values from polls rather than from other sources, a point to which we return later in the chapter. This enables us to focus on change in different ways and to examine empirically which notion is most in line with how media outlets actually select opinion polls.

First, the simplest change to focus on is the difference between two polls for one party. In this case, we are low on both the poll complexity dimension and the party complexity dimension. For example, if a party gets 25% of the support in one opinion poll and 30% in the next, the change is 5 percentage points. This information is easy to compute and is thereby highly available to journalists, politicians, and the public. Since there are multiple parties in both countries analysed, we take the maximum change between two polls (positive or negative), i.e. the largest change in absolute value, as our poll-level measurement. In other words, most parties in a poll can show no to little change, but if just one party shows a big change, this simple change measure will pick it up.

Second, the political coverage of a party is not limited to two polls but can draw on a series of opinion polls. This happens when the support for a party in a new poll is compared to some (weighted) average of the party’s prior support across a series of polls, as introduced earlier in the chapter. The availability of this information differs according to the number of polls included and whether readily available data exist for quick comparisons, such as services publishing rolling averages of the polls. However, even in such cases, while the journalistic availability could still be reasonably high, the potential for a simple change narrative is reduced by the complexities of relying on trends, averages, and long-term developments in party support numbers. Our exact measurement is the maximum absolute deviation between the polling numbers and the estimated weighted support for the day before the poll was conducted. Through this step, we ensure that our benchmark (the weighted party support average) is not influenced by the poll we are analysing. As before, we take each party in the poll, calculate its deviation from the weighted average of the day before, and record the highest absolute deviation as our poll-level measure. While the weighted averages available to journalists and editors might differ from ours, alternative averages tap into the same general party trends in the polls, which is what is of primary interest with this measure of change.
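The first two measures can be sketched in a few lines; the party names and numbers below are hypothetical, and the functions are our illustration rather than the authors’ code:

```python
def max_poll_change(prev, curr):
    """Measure 1: the largest absolute party-level change between two
    consecutive polls. prev and curr map party -> support in percentage points."""
    return max(abs(curr[party] - prev[party]) for party in curr)

def max_deviation_from_average(benchmark, curr):
    """Measure 2: the largest absolute deviation of a new poll from a pre-poll
    benchmark, e.g. a pooled (weighted) support average for the day before."""
    return max(abs(curr[party] - benchmark[party]) for party in curr)

prev = {"A": 25.0, "B": 30.0, "C": 10.0}
curr = {"A": 30.0, "B": 28.0, "C": 9.0}

# One big change dominates the measure even if the other parties barely move:
print(max_poll_change(prev, curr))  # 5.0
```

Both functions return a single poll-level number, mirroring the text: most parties can be stable, yet one large shift is enough to register as change.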
E. G. LARSEN AND Z. FAZEKAS

Third, in most political systems, we can express change among multiple parties rather than just looking at the change for one party. The focus here is not on how the support for a single party changes but on the aggregate level of change between two polls across different parties. The change reference points follow the same patterns as before, looking at a previous poll or some richer mixture of polling information. We summarise the total change between two polls with a volatility index for all or some parties. Such volatility can be measured by the Pedersen index (Pedersen, 1979). The measure, theoretically ranging from 0 (no change at all) to 100 (no previously supported party retains any support), provides a direct measure of change in the political competition in any multiparty system. It is calculated as the sum of absolute gains and losses across all parties, divided by two. Substantively, this is often the focus when the coverage is interested in the evolution of support for multiple parties in the case of potential coalition formation, government survival, ideological leanings, or political system stability in general. In addition, more numerous and larger changes in a poll increase the number of possible narratives to report and therefore the likelihood of media outlets selecting such a poll.

Fourth, and the most demanding to compute and interpret, is a measure that relies on data from multiple polls and concerns multiple parties. In essence, the options here are quite varied. We can check whether the change we see in a new poll differs from the volatility previously seen in the political coverage, based on an average volatility, but we can also calculate the volatility of the new poll relative to the (weighted) averages of all party support at the last time point. As is apparent, this combination, whichever form it takes, requires considerable effort: gathering the input data, performing various calculations and, most importantly, communicating the result to a broader audience. Indeed, we acknowledge that it is challenging to convey this change measure in simple terms even on these pages. For both consistency and interpretability within our framework, we calculate the same Pedersen volatility index, but between the estimated (weighted) party support averages on the day before the poll and the polling numbers themselves. Our key point here is that there are different notions of change and that they differ in their availability to journalists, that is, in how easy it is to identify whether there are any changes when looking at a new opinion poll.
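The Pedersen index follows directly from its definition: the sum of absolute gains and losses across all parties, divided by two. A minimal sketch (party labels and support numbers are invented):

```python
# Pedersen volatility index between two support distributions.
# 0 = complete stability; 100 = no previously supported party retains support.

def pedersen_index(poll_a, poll_b):
    """Sum of absolute party-level changes between two polls, divided by two."""
    parties = set(poll_a) | set(poll_b)  # parties absent from one poll count as 0
    return sum(abs(poll_b.get(p, 0.0) - poll_a.get(p, 0.0)) for p in parties) / 2

old = {"A": 40.0, "B": 35.0, "C": 25.0}
new = {"A": 44.0, "B": 33.0, "C": 23.0}

print(pedersen_index(old, new))  # (4 + 2 + 2) / 2 = 4.0
```

The fourth, low-availability measure described above would call the same function with the estimated (weighted) party support averages in place of `old`.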
We summarise the typology in Table 3.1, together with information on whether each change measure has low, moderate, or high availability.

Table 3.1 Operationalisations of change and availability

                   Two polls                      Multiple polls
One party          Difference between two polls   Difference from party average
                   (high availability)            (moderate availability)
Multiple parties   Volatility between two polls   Volatility from average differences
                   (moderate availability)        (low availability)

Our main expectation is that the change measure with high availability will be a more important driver of selection than the change measures with lower availability. There are some remaining aspects to clarify. First, these notions of change rely only on comparisons to other poll-based information. In principle, journalists can draw on various other pieces of information, often related to thresholds and milestones. Most relevant is the election result: is the party doing better or worse than on the most recent election day? Similarly, in systems with an electoral threshold, smaller parties can be benchmarked against whether they clear the threshold. The comparison can also be within a poll, for example by noting that the level of support for two parties is now identical. The general mechanism of deviation, comparison, or change applies in these cases as well, and these are cases where journalists look at one party at a time and typically benchmark against one fixed value per party. These comparisons are high in availability, so we would expect them to work similarly to changes relative to a single prior poll. Second, we can also think of the newsworthy element in terms of the absolute size of a lead within one poll. While such leads are important, they are always put in context when presented and covered. Hence, if the size of a lead contributes to selection, it will do so through the same logic as introduced before, pushing journalists to look for baseline or anchor comparisons, such as the historically largest or smallest lead, or the lead compared to previous polls or prior elections. In other words, it is difficult to look at opinion polls without making comparisons of one form or another. Third, the poll-related metrics or the changes between two polls can be expressed in different ways.
We have mentioned examples regarding potential volatility when multiple previous polls are compared. Our operationalisation choices cover a wide variety of possible ways to measure change-related metrics. However, we can only assume, especially in the more complex cases, that these actually factor into the individual journalist's or editor's decision. In the next chapters, we show that change-related considerations occupy an important role in the coverage, which offers further reassurance about the mechanisms expected here. When certain events are more likely to be selected, the reporting can accommodate potential biases, e.g. by emphasising the potentially unrepresentative nature of the events, but such biases can also be exaggerated and lead to even more unrepresentative coverage.

When reviewing the Danish and British polling numbers, our visual inspection highlighted a picture of short-term stability. How does this translate into our change measures? For example, we can look at the volatility between two polls, which summarises multiple parties but only two polls, placing it at the moderate availability level. The theoretical range is from 0 (complete stability) to 100 (all party support changes hands). Essentially, we are looking at the net change in party support between polls across all parties measured. The median volatility between two subsequent polls carried out by the same polling company was 3.05 in Denmark and 2.5 in the United Kingdom. The median for the most complex change measure, involving volatility and the weighted average prior to the poll's date, was 2.84 in Denmark and 2.56 in the United Kingdom, respectively. Yet again, these numbers indicate overwhelming stability between polls.
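The median between-poll volatility for a series of polls from one polling company can be computed as follows. This is a sketch with invented numbers; the actual medians reported above (3.05 for Denmark, 2.5 for the United Kingdom) come from the full poll series.

```python
# Median Pedersen volatility across consecutive polls from the same company.
# Poll series and party labels are invented for illustration.
from statistics import median

def pedersen_index(poll_a, poll_b):
    """Sum of absolute party-level changes between two polls, divided by two."""
    parties = set(poll_a) | set(poll_b)
    return sum(abs(poll_b.get(p, 0.0) - poll_a.get(p, 0.0)) for p in parties) / 2

series = [
    {"A": 40.0, "B": 35.0, "C": 25.0},
    {"A": 41.0, "B": 34.0, "C": 25.0},
    {"A": 44.0, "B": 32.0, "C": 24.0},
    {"A": 43.0, "B": 33.0, "C": 24.0},
]

# Volatility between each pair of consecutive polls in the series.
volatilities = [pedersen_index(a, b) for a, b in zip(series, series[1:])]
print(volatilities)          # [1.0, 3.0, 1.0]
print(median(volatilities))  # 1.0
```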

3.4 Linking Opinion Polls to News Articles: Leaving Everything to Change?

We link the variation in volatility to the news selection appetite. To do so, the last piece of our selection model is whether a poll was selected and ended up as news, which serves as our outcome variable. In Denmark, we collected news articles from nine different newspapers, their webpages and the webpages of two national TV companies. The newspapers are Berlingske, Børsen, B.T., Jyllands-Posten, MetroXpress, Politiken, Ekstra-Bladet, Kristeligt Dagblad and Information, with webpages b.dk/politiko.dk, borsen.dk, bt.dk, jyllands-posten.dk, mx.dk, politiken.dk, ekstrabladet.dk, kristeligt-dagblad.dk and information.dk. The TV companies are the Danish Broadcasting Company (dr.dk) and TV 2 (politik.tv2.dk). Aggregated, the newspapers had a readership of 1,864,000 (out of a total population of 5,643,000) on a normal weekday in the second half of 2014. Four of the newspapers did not have any formal arrangement with a polling firm, whereas the other outlets commissioned polls through the firms used in the analyses. One polling firm had a news outlet partner not included directly in our data collection, as its polls are commissioned by Ritzau, the largest Danish independent news agency. One outlet (Jyllands-Posten) switched firms in 2013, from Rambøll


to Wilke. We account for this partnership agreement in our models. Throughout the analysis, we treat online and offline platforms from the same media outlet as one outlet. Overall, these different arrangements ensure substantial variation in the relationships between polling firms and media outlets. We collected the news articles using the Danish digital archive Infomedia, which contains all online and print articles in the nationwide coverage. Our article identification procedure was informed by repeated manual checks and readings of the articles, in order to refine our search and filtering process. For each opinion poll, we first searched for articles mentioning the polling firm and any party, published within six days after the poll was collected. The six-day span ensures that we focus on the reporting of specific polls as news. One particular reason for not using a full week (seven days) is that Voxmeter conducts weekly polls, which would potentially lead to a misclassification of articles. Next, we searched the articles for mentions of numbers from the poll to which they were assigned (such as 16,9, with both decimal comma and decimal point). In addition, we searched for the Danish bi-grams for "new poll" ("ny måling") and "new opinion poll" ("ny meningsmåling"). We define an article as pertinent if either of these two filters returns a positive result, yielding a total of 4,147 articles spread across 412 polls. These steps ensure that we do not include old polls or polls covering topics other than vote intention, such as prime minister preference.
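The number-or-bigram filter can be sketched as follows. This is a simplified illustration, not the authors' code: the firm/party search and the six-day window are assumed to have been applied already, and the example article text is invented.

```python
# Hedged sketch of the Danish pertinence filter: an article counts as
# pertinent if it contains a number from the assigned poll (with decimal
# comma or decimal point) or one of the bi-grams "ny måling" /
# "ny meningsmåling".
import re

def is_pertinent(text, poll_numbers):
    text_lower = text.lower()
    # Filter 1: the "new (opinion) poll" bi-grams.
    if "ny måling" in text_lower or "ny meningsmåling" in text_lower:
        return True
    # Filter 2: exact poll numbers, matched as standalone tokens.
    for value in poll_numbers:
        point = str(value)                  # e.g. "16.9"
        comma = point.replace(".", ",")     # e.g. "16,9"
        pattern = r"\b(?:%s|%s)\b" % (re.escape(comma), re.escape(point))
        if re.search(pattern, text):
            return True
    return False

article = "Venstre står til 16,9 procent i en ny måling fra Voxmeter."
print(is_pertinent(article, [16.9]))               # True (both filters match)
print(is_pertinent("Valget nærmer sig.", [16.9]))  # False
```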
The process of identifying the British poll news articles shares many features with the Danish case; however, we specifically adapted our steps for better face validity, given differences in reporting style, political context, and the fact that we are working with only one (broad) outlet and three polling companies. As in Denmark, we only look for mentions in the week after a poll has been released, to keep the focus on news reporting. We start by searching the Guardian archives for the polling company that fielded a poll, restricted to the 7-day periods following each poll's release. To do this, we relied on the Open Platform Content API of the Guardian Media Group, which provides articles from The Observer, the Guardian, and guardian.co.uk from 1999 to today. We accessed this API through GuardianR, an R wrapper for the Guardian API (Bastos, 2015).
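The original analysis used the R package GuardianR; for illustration, a Python sketch of the underlying request shows the shape of such a query. The endpoint and parameter names (`q`, `from-date`, `to-date`, `page-size`, `api-key`) follow the public Guardian Content API; the API key, date, and company name are placeholders.

```python
# Build a Guardian Content API search URL covering the 7-day window
# after a poll's release. No network request is made here.
from datetime import date, timedelta
from urllib.parse import urlencode

def guardian_query_url(company, poll_date, api_key="YOUR-KEY"):
    """Search URL for articles mentioning `company` within 7 days of `poll_date`."""
    params = {
        "q": company,
        "from-date": poll_date.isoformat(),
        "to-date": (poll_date + timedelta(days=7)).isoformat(),
        "page-size": 50,
        "api-key": api_key,
    }
    return "https://content.guardianapis.com/search?" + urlencode(params)

url = guardian_query_url("ICM", date(2015, 4, 1))
print(url)
```

Fetching and paging through the JSON response would follow, but the query window logic above is the part that mirrors the 7-day restriction described in the text.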


Next, we ensure that the content of each article (including the headline) simultaneously mentions "election*", "poll*", and the name of the polling company, so that we are looking at an electoral poll from that period and polling company. To make sure we did not include articles where Brexit or other referendums were the main focus, we excluded articles that mentioned referendum and/or Brexit in the headline. This results in 875 newspaper articles across the 312 polls. The search criteria are thus very similar across the two countries, although some differences reflect the different reporting styles and the frequency of the polls analysed. For example, the reporting of exact numbers with decimals is less frequent in the United Kingdom, so we did not rely on such a filter in this context. While we aim for comparability, our core concern is to have pertinent poll mentions in each of the two countries analysed.
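The Guardian-side filter can be sketched as a simple predicate. This is an illustrative simplification: substring matching stands in for the wildcard search ("election*", "poll*"), and the headlines and body text are invented.

```python
# Hedged sketch of the Guardian article filter: keep an article only if its
# text (headline + body) mentions "election", "poll" and the polling company,
# and its headline mentions neither "referendum" nor "brexit".

def keep_article(headline, body, company):
    text = (headline + " " + body).lower()
    head = headline.lower()
    # Exclusion step: Brexit/referendum-focused articles are dropped.
    if "referendum" in head or "brexit" in head:
        return False
    # Inclusion step: all three terms must co-occur somewhere in the article.
    return "election" in text and "poll" in text and company.lower() in text

print(keep_article(
    "Labour narrows gap in latest ICM poll",
    "The general election campaign continues as a new ICM poll shows...",
    "ICM",
))  # True
print(keep_article(
    "Brexit poll shows divided nation",
    "A new ICM poll ahead of the election...",
    "ICM",
))  # False: excluded by the headline rule
```

Substring matching on "poll" also catches "polls" and "polling", which approximates the stemmed "poll*" search described above.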

3.5 How Change Matters for the Selection of Polls

We now model the number of mentions a poll gets as a function of the change measures and additional control variables. For each change operationalisation and country, we fit negative binomial models, as these are most suitable for the distribution of the mention counts. For ease of interpretation and comparability, the coefficient estimates for the different change measures are the effects associated with a change of two standard deviations on the predictors. We control for the time elapsed between the two polls (in days) and the calendar year (included, but not reported in detail). We summarise our results in Tables 3.2 and 3.3. While the statistical models differ to take the nature of the data into account, the change measures in the regression tables should be interpreted as follows: the greater the coefficient for a specific change measure, the greater the impact of change on the outcome of interest in the table at hand. Importantly, we also present visual summaries of the key results, so the tables can be skipped by readers not interested in the exact estimates and model summaries. Our results indicate substantial change-related selection in both countries, and the evidence is clearly in favour of a simpler change-based selection mechanism: high availability matters, and it is mostly related to comparisons to a single previous poll, rather than the number of parties. Thus, while change measures summarising one or multiple parties are not


Table 3.2 Selection as a function of change, Denmark

                     Maximum change,   Maximum change,   Volatility,       Volatility,
                     previous poll     moving average    previous poll     moving average

Intercept            2.219*** (0.127)  2.155*** (0.127)  2.229*** (0.128)  2.165*** (0.127)
Change measure       0.500*** (0.108)  0.294* (0.117)    0.494*** (0.110)  0.348** (0.122)
Days                 0.002 (0.004)     0.009* (0.004)    0.001 (0.004)     0.010* (0.004)
Year fixed effects?  Yes               Yes               Yes               Yes
AIC                  3026.454          3087.931          3028.177          3086.570
BIC                  3055.641          3117.220          3057.365          3115.859
Log Likelihood       −1506.227         −1536.966         −1507.089         −1536.285
N                    478               485               478               485
*** p < 0.001; ** p