ADVANCED TECHNIQUES FOR COLLECTING STATISTICAL DATA
Edited by: Olga Moreira
Arcler Press
www.arclerpress.com
Advanced Techniques for Collecting Statistical Data
Olga Moreira
Arcler Press
224 Shoreacres Road
Burlington, ON L7L 2H2
Canada
www.arclerpress.com
Email: [email protected]
e-book Edition 2023
ISBN: 978-1-77469-547-0 (e-book)

This book contains information obtained from highly regarded resources. Sources of reprinted material are indicated. Copyright for individual articles remains with the authors as indicated, and the articles are published under a Creative Commons License. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data; the views articulated in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. The editors and publisher are not responsible for the accuracy of the information in the published chapters or for the consequences of their use. The publisher assumes no responsibility for any damage or grievance to persons or property arising out of the use of any materials, instructions, methods or ideas in the book. The editors and the publisher have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission has not been obtained. If any copyright holder has not been acknowledged, please write to us so that we may rectify the omission.

Notice: Registered trademarks of products or corporate names are used only for explanation and identification, without intent to infringe.

© 2023 Arcler Press
ISBN: 978-1-77469-497-8 (Hardcover)

Arcler Press publishes a wide variety of books and eBooks. For more information about Arcler Press and its products, visit our website at www.arclerpress.com.
DECLARATION

Some of the content and chapters in this book are open-access, copyright-free published research works, published under a Creative Commons License and indicated with a citation. We are thankful to the publishers and authors of this content and these chapters, as without them this book would not have been possible.
ABOUT THE EDITOR
Olga Moreira holds a Ph.D. and an M.Sc. in Astrophysics and a B.Sc. in Physics/Applied Mathematics (Astronomy). She is an experienced technical writer and data analyst. As a graduate student, she held two research grants to carry out her work in Astrophysics at two of the most renowned European institutions in the fields of Astrophysics and Space Science (the European Space Agency and the European Southern Observatory). She is currently an independent scientist, peer reviewer and editor. Her research interests include solar physics, machine learning and artificial neural networks.
TABLE OF CONTENTS
List of Contributors ..... xv
List of Abbreviations ..... xxi
Preface ..... xxiii

Chapter 1. Series: Practical Guidance to Qualitative Research. Part 1: Introduction ..... 1
    Abstract ..... 1
    Introduction ..... 2
    Qualitative Research ..... 2
    High-Quality Qualitative Research in Primary Care ..... 3
    Further Education and Reading ..... 4
    Acknowledgements ..... 5
    References ..... 6

Chapter 2. Series: Practical Guidance to Qualitative Research. Part 2: Context, Research Questions and Designs ..... 7
    Abstract ..... 7
    Introduction ..... 9
    Context ..... 9
    Research Questions ..... 10
    Designing Qualitative Studies ..... 13
    Acknowledgements ..... 18
    References ..... 19

Chapter 3. Series: Practical Guidance to Qualitative Research. Part 3: Sampling, Data Collection and Analysis ..... 21
    Abstract ..... 21
    Introduction ..... 23
    Sampling ..... 23
    Data Collection ..... 27
    Analysis ..... 35
    Acknowledgements ..... 38
    References ..... 39

Chapter 4. Series: Practical Guidance to Qualitative Research. Part 4: Trustworthiness and Publishing ..... 41
    Abstract ..... 41
    Introduction ..... 43
    Trustworthiness ..... 43
    Publishing ..... 47
    Acknowledgements ..... 50
    References ..... 51

Chapter 5. Participant Observation as a Data Collection Method ..... 53
    Abstract ..... 53
    Introduction ..... 54
    Definitions ..... 54
    The History of Participant Observation as a Method ..... 55
    Advantages and Disadvantages of Using Participant Observation ..... 59
    The Stances of the Observer ..... 62
    How Does One Know What to Observe? ..... 64
    How Does One Conduct an Observation? ..... 65
    Tips for Collecting Useful Observation Data ..... 73
    Keeping and Analyzing Field Notes and Writing Up the Findings ..... 76
    Teaching Participant Observation ..... 80
    Summary ..... 84
    References ..... 85

Chapter 6. Attitudes towards Participation in a Passive Data Collection Experiment ..... 89
    Abstract ..... 89
    Introduction ..... 90
    Background ..... 91
    Methods and Design ..... 98
    Results and Discussion ..... 103
    Conclusions ..... 110
    Acknowledgments ..... 111
    Appendix B. Internal Validity Test of Vignette Responses ..... 114
    Author Contributions ..... 115
    References ..... 116

Chapter 7. An Integrative Review on Methodological Considerations in Mental Health Research – Design, Sampling, Data Collection Procedure and Quality Assurance ..... 121
    Abstract ..... 121
    Background ..... 123
    Methods ..... 125
    Results ..... 129
    Discussion ..... 142
    Conclusion ..... 148
    Acknowledgements ..... 149
    Authors' Contributions ..... 149
    References ..... 150

Chapter 8. Wiki Surveys: Open and Quantifiable Social Data Collection ..... 155
    Abstract ..... 155
    Introduction ..... 156
    Wiki Surveys ..... 157
    Case Studies ..... 165
    Discussion ..... 171
    Acknowledgments ..... 173
    Author Contributions ..... 173
    References ..... 174

Chapter 9. Towards a Standard Sampling Methodology on Online Social Networks: Collecting Global Trends on Twitter ..... 181
    Abstract ..... 182
    Introduction ..... 182
    Related Work ..... 185
    Problem Definition ..... 187
    Random Strategies ..... 188
    The Alternative Version of the Metropolis-Hastings Algorithm ..... 191
    Sampling Global Trends on Twitter ..... 192
    Results ..... 195
    Limitations ..... 202
    Conclusions ..... 202
    Acknowledgements ..... 204
    Authors' Contributions ..... 204
    References ..... 205

Chapter 10. Mobile Data Collection: Smart, but Not (Yet) Smart Enough ..... 209
    Background ..... 209
    Smart Mobile Data Collection ..... 210
    Smarter Mobile Data Collection in the Future ..... 212
    Conclusions ..... 214
    Author Contributions ..... 215
    Acknowledgments ..... 215
    References ..... 216

Chapter 11. Comparing a Mobile Phone Automated System With a Paper and Email Data Collection System: Substudy Within a Randomized Controlled Trial ..... 221
    Abstract ..... 222
    Introduction ..... 223
    Methods ..... 224
    Results ..... 230
    Discussion ..... 237
    Acknowledgments ..... 241
    References ..... 242

Chapter 12. Big Data Collection and Object Participation Willingness: An Analytical Framework from the Perspective of Value Balance ..... 247
    Abstract ..... 247
    The Origin of Research ..... 248
    The Presentation of Analytical Framework ..... 250
    Conclusion and Prospect ..... 255
    Reference ..... 256

Chapter 13. Research on Computer Simulation Big Data Intelligent Collection and Analysis System ..... 257
    Abstract ..... 257
    Introduction ..... 258
    Principles of Big Data Intelligent Fusion ..... 259
    Experimental Simulation Analysis ..... 262
    Conclusion ..... 265
    References ..... 266

Chapter 14. Development of a Mobile Application for Smart Clinical Trial Subject Data Collection and Management ..... 267
    Abstract ..... 268
    Introduction ..... 268
    Materials and Methods ..... 270
    Results ..... 272
    Discussion ..... 278
    Conclusions ..... 281
    Author Contributions ..... 282
    References ..... 283

Chapter 15. The CoronaSurveys System for COVID-19 Incidence Data Collection and Processing ..... 287
    Introduction ..... 288
    Data Collection ..... 290
    Data Analysis ..... 294
    Data Visualization ..... 296
    Results ..... 300
    Conclusion ..... 301
    Author Contributions ..... 302
    References ..... 303

Chapter 16. Artificial Intelligence Based Body Sensor Network Framework—Narrative Review: Proposing an End-to-End Framework using Wearable Sensors, Real-Time Location Systems and Artificial Intelligence/Machine Learning Algorithms for Data Collection, Data Mining and Knowledge Discovery in Sports and Healthcare ..... 305
    Abstract ..... 306
    Introduction ..... 307
    Artificial Intelligence-Based Body Sensor Network Framework: AIBSNF ..... 318
    Specific Applications ..... 320
    General Applications ..... 322
    Limitations and Issues ..... 324
    Conclusion ..... 326
    Acknowledgements ..... 326
    Authors' Contributions ..... 326
    References ..... 327

Chapter 17. DAViS: a Unified Solution for Data Collection, Analyzation, and Visualization in Real-time Stock Market Prediction ..... 337
    Abstract ..... 337
    Introduction ..... 338
    Related Literature ..... 343
    Preliminary ..... 345
    The Proposed DAViS Framework ..... 347
    Experimental Setup ..... 361
    Experimental Result ..... 364
    Conclusions and Future Direction ..... 375
    Acknowledgements ..... 376
    References ..... 377

Index ..... 381
LIST OF CONTRIBUTORS

Albine Moser
Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands
Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands

Irene Korstjens
Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands

Barbara B. Kawulich
University of West Georgia, Educational Leadership and Professional Studies Department, 1601 Maple Street, Room 153, Education Annex, Carrollton, GA 30118, USA

Bence Ságvári
Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
Institute of Communication and Sociology, Corvinus University, Fővám tér 8, 1093 Budapest, Hungary

Attila Gulyás
Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary

Júlia Koltai
Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
Department of Network and Data Science, Central European University, Quellenstraße 51, 1100 Vienna, Austria
Faculty of Social Sciences, Eötvös Loránd University of Sciences, Pázmány Péter Sétány 1/A, 1117 Budapest, Hungary
Eric Badu
School of Nursing and Midwifery, The University of Newcastle, Callaghan, Australia

Anthony Paul O'Brien
Faculty of Health and Medicine, School Nursing and Midwifery, University of Newcastle, Callaghan, Australia

Rebecca Mitchell
Faculty of Business and Economics, Macquarie University, North Ryde, Australia

Matthew J. Salganik
Department of Sociology, Center for Information Technology Policy, and Office of Population Research, Princeton University, Princeton, NJ, USA

Karen E. C. Levy
Information Law Institute and Department of Media, Culture, and Communication, New York University, New York, NY, USA
Data & Society Research Institute, New York, NY, USA

C. A. Piña-García
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Departamento de Ciencias de la Computación, Universidad Nacional Autónoma de México, Ciudad de México, México

Carlos Gershenson
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Departamento de Ciencias de la Computación, Universidad Nacional Autónoma de México, Ciudad de México, México
Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Circuito Maestro Mario de la Cueva S/N, Ciudad Universitaria, Ciudad de México, 04510 México
SENSEable City Lab, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, 02139 USA
MoBS Lab, Network Science Institute, Northeastern University, 360 Huntington av 1010-177, Boston, 02115 USA
ITMO University, Birzhevaya liniya 4, St. Petersburg, 199034 Russia

J. Mario Siqueiros-García
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Departamento de Ciencias de la Computación, Universidad Nacional Autónoma de México, Ciudad de México, México
Alexander Seifert
University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland

Matthias Hofer
University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
Department of Communication and Media Research, University of Zurich, Zurich, Switzerland

Mathias Allemand
University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
Department of Psychology, University of Zurich, Zurich, Switzerland

Diana M Bond, PhD
Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, Australia

Jeremy Hammond, PhD
Strategic Ventures, University of Sydney, Sydney, Australia

Antonia W Shand, MB ChB
Children's Hospital at Westmead Clinical School, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
Department for Maternal Fetal Medicine, Royal Hospital for Women, Sydney, Australia

Natasha Nassar, PhD
Sydney School of Public Health, Faculty of Medicine and Health, University of Sydney, Sydney, Australia
Children's Hospital at Westmead Clinical School, Faculty of Medicine and Health, University of Sydney, Sydney, Australia

Xiang Huang
Guangdong University of Finance and Economics, College of entrepreneurship education, Guangzhou 510320, China

Hongying Liu
Department of Computer Science and Engineering, Guangzhou College of Technology and Business, Guangzhou 510850, China
Hyeongju Ryu
Biomedical Research Institute, Seoul National University Hospital, Seoul 03080, Korea

Meihua Piao
Office of Hospital Information, Seoul National University Hospital, Seoul 03080, Korea

Heejin Kim
Clinical Trials Center, Seoul National University Hospital, Seoul 03080, Korea

Wooseok Yang
Clinical Trials Center, Seoul National University Hospital, Seoul 03080, Korea

Kyung Hwan Kim
Department of Thoracic and Cardiovascular Surgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul 03080, Korea

Carlos Baquero
U. Minho and INESC TEC, Braga, Portugal

Paolo Casari
Department of Information Engineering and Computer Science, University of Trento, Trento, Italy

Antonio Fernandez Anta
IMDEA Networks Institute, Madrid, Spain

Amanda García-García
IMDEA Networks Institute, Madrid, Spain

Davide Frey
Inria Rennes, Rennes, France

Augusto Garcia-Agundez
Multimedia Communications Lab, TU Darmstadt, Darmstadt, Germany

Chryssis Georgiou
Department of Computer Science, University of Cyprus, Nicosia, Cyprus

Benjamin Girault
Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, United States
Antonio Ortega
Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, United States

Mathieu Goessens
Consulting, Rennes, France

Harold A. Hernández-Roig
Department of Statistics, UC3M & UC3M-Santander Big Data Institute, Getafe, Spain

Nicolas Nicolaou
Algolysis Ltd, Nicosia, Cyprus

Efstathios Stavrakis
Algolysis Ltd, Nicosia, Cyprus

Oluwasegun Ojo
IMDEA Networks Institute and UC3M, Madrid, Spain

Julian C. Roberts
Skyhaven Media, Liverpool, United Kingdom

Ignacio Sanchez
InqBarna, Barcelona, Spain

Ashwin A. Phatak
Institute of Exercise Training and Sport Informatics, German Sports University, Cologne, Germany

Franz-Georg Wieland
Institute of Physics, University of Freiburg, Freiburg im Breisgau, Germany

Kartik Vempala
Bloomberg LP, New York, USA

Frederik Volkmar
Institute of Exercise Training and Sport Informatics, German Sports University, Cologne, Germany

Daniel Memmert
Institute of Exercise Training and Sport Informatics, German Sports University, Cologne, Germany
Suppawong Tuarob
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand

Poom Wettayakorn
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand

Ponpat Phetchai
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand

Siripong Traivijitkhun
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand

Sunghoon Lim
Department of Industrial Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
Institute for the 4th Industrial Revolution, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea

Thanapon Noraset
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand

Tipajin Thaipisutikul
Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom 73170, Thailand
LIST OF ABBREVIATIONS
BSN       Body sensor networks
AIBSNF    Artificial intelligence-based body sensor network framework
RTLS      Real-time location systems
AI/ML     Artificial intelligence and machine learning
ECG       Electrocardiogram
EMG       Electromyogram
ANN       Artificial neural networks
GSR       Galvanic skin resistance
LED       Light emitting diode
IMU       Inertial measurement units
LoS       Line of sight
NLoS      No line of sight
MLFF      Multi-level fusion framework
RA        Rheumatoid arthritis
TCM       Traditional Chinese medicine
PREFACE
We live in an age of Big Data, and this is changing the way researchers collect and preprocess data. This book aims to provide a broad view of current methods and techniques, as well as automated systems, for statistical data collection. It is divided into three parts, each focusing on a different aspect of the statistical data collection process.

The first part of the book introduces readers to qualitative research data collection methods. Chapters 1 to 4 comprise a practical guide by Moser & Korstjens (2017) on designing qualitative studies and on sampling, collecting, and analyzing data about people, processes, and cultures; Chapters 5 and 6 focus on observation-based methods, participant observation in particular. Chapter 1 introduces the concept of "qualitative research" from the point of view of clinical trials and the healthcare sciences, where qualitative research is seen as "the investigation of phenomena, typically in an in-depth and holistic fashion, through the collection of rich narrative materials using a flexible research design". Chapter 2 answers frequent queries about the context, research questions and design of qualitative research. Chapter 3 is devoted to sampling strategies, as well as data collection and analysis plans. Chapter 4 reflects upon the trustworthiness of the collected data. Chapter 5 covers the various definitions of participant observation and the purposes for which it is used, along with exercises for teaching observation techniques. Chapter 6 presents an exploratory study conducted in Hungary that used a factorial design-based online survey to explore willingness to participate in a future research project based on active and passive data collection via smartphones.

The second part of the book is focused on data mining of information collected from clinical and social survey studies, as well as from social media. Chapter 7 includes a review of methods used in clinical research, from study design to sampling and data collection. Chapters 8 and 9 present data collection methods that facilitate quantification of information from online survey respondents and social media. Chapter 8 presents a new method for data collection and analysis for pairwise wiki surveys, using two proof-of-concept case studies involving the free and open-source website www.allourideas.org. Chapter 9 proposes a methodology for carrying out an efficient data collection process via three random strategies (Brownian, Illusion and Reservoir) and shows that this methodology can be used to collect global trends on Twitter. Chapters 10 and 11 are focused on mobile data collection methods, and Chapters 12 and 13 on big data collection systems. Chapter 10 reflects on the many challenges of mobile data collection with smartphones, as well as on the interesting avenues that the future development of this technology can open for clinical research. Chapter 11 compares a web-based mobile phone automated system (MPAS) with traditional paper and email-based data collection (PEDC), and demonstrates that the MPAS has the potential to be a more effective and acceptable method for improving the overall management, treatment compliance, and methodological quality of clinical research. Chapter 12 proposes an analytical framework that considers the decision-making of big data objects participating in the big data collection process; this framework aims to identify factors that can improve the participation willingness of big data objects. Chapter 13 proposes a Java3D-based big data network multi-resolution acquisition method with lower acquisition costs, shorter completion times, and higher acquisition accuracy than most current data collection and analysis systems.

The third and last part of this book is focused on current efforts to optimize and automate data collection procedures. Chapter 14 presents the development of a mobile application for collecting subject data in clinical trials, which is shown to increase the efficiency of clinical trial management. Chapter 15 describes the CoronaSurveys system developed to facilitate COVID-19 data collection. The proposed system includes multiple components and processes: the web survey; the mobile apps; the cleaning and aggregation of survey responses; data storage and publication; data processing and computation of estimates; and visualization of the results. Chapter 16 is focused on machine learning algorithms for data collection, data mining and knowledge discovery in sports and healthcare. It proposes an artificial intelligence-based body sensor network framework (AIBSNF) for the strategic use of body sensor networks (BSN), combining a real-time location system (RTLS) with wearable biosensors to collect multivariate, low-noise, and high-fidelity data. Chapter 17 introduces DAViS, an automated system for data collection, analysis, and visualization for real-time stock market prediction. The proposed stock forecasting method outperforms a traditional baseline and confirms that leveraging an ensemble of machine learning methods together with contextual information improves stock prediction performance.
Chapter 1

SERIES: PRACTICAL GUIDANCE TO QUALITATIVE RESEARCH. PART 1: INTRODUCTION
Albine Moser (a,b) and Irene Korstjens (c)

(a) Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands
(b) Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands
(c) Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands
ABSTRACT

In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called Frequently Asked Questions. This journal series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of papers reporting on qualitative research. This first article describes the key features of qualitative research, provides publications for further learning and reading, and gives an outline of the series.

Keywords: Qualitative research, qualitative methodology, phenomena, natural context, emerging design, primary care

Citation (APA): Moser, A., & Korstjens, I. (2017). Series: Practical guidance to qualitative research. Part 1: Introduction. European Journal of General Practice, 23(1), 271–273. (4 pages)
Copyright: © This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).
INTRODUCTION

In the course of our supervisory work over the years, we have noticed that while many researchers who conducted qualitative research for the first time understood the tenets of qualitative research, knowing about qualitative methodology and carrying out qualitative research were two different things. We noticed that they somehow mixed quantitative and qualitative methodology and methods. We also observed that they experienced many uncertainties when doing qualitative research. They expressed a great need for practical guidance regarding key methodological issues. For example, questions often heard and addressed were, 'What kind of literature would I search for when preparing a qualitative study?' 'Is it normal that my research question seems to change during the study?' 'What types of sampling can I use?' 'What methods of data collection are appropriate?' 'Can I wait with my analysis until all data have been collected?' 'What are the quality criteria for qualitative research?' 'How do I report my qualitative study?' This induced us to write this series providing 'practical guidance' to qualitative research.
QUALITATIVE RESEARCH

Qualitative research has been defined as the investigation of phenomena, typically in an in-depth and holistic fashion, through the collection of rich narrative materials using a flexible research design [1]. Qualitative research aims to provide in-depth insights and understanding of real-world problems and, in contrast to quantitative research, it does not introduce treatments, manipulate or quantify predefined variables. Qualitative research encompasses many different designs, which however share several key features as presented in Box 1.
Box 1. Key features of qualitative research.
• Qualitative research studies phenomena in the natural contexts of individuals or groups.
• Qualitative researchers try to gain a deeper understanding of people's experiences, perceptions, behaviour and processes and the meanings they attach to them.
• During the research process, researchers use 'emerging design' to be flexible in adjusting to the context.
• Data collection and analysis are iterative processes that happen simultaneously as the research progresses.
Qualitative research is associated with the constructivist or naturalistic paradigm, which began as a countermovement to the positivistic paradigm associated with quantitative research. Where positivism assumes that there is an orderly reality that can be objectively studied, constructivism holds that there are multiple interpretations of reality and that the goal of the research is to understand how individuals construct reality within their natural context [1].
HIGH-QUALITY QUALITATIVE RESEARCH IN PRIMARY CARE

Qualitative research is a vital aspect of research in primary care, and qualitative studies with a clear and important clinical message can be highly cited [2,3]. This series intends to provide novice researchers with an introduction to conducting high-quality qualitative research in the field of primary care. By novice researchers, we mean Master's students and junior researchers in primary care as well as experienced quantitative researchers who are engaging in qualitative research for the first time. As primary care is an interprofessional field, we bear in mind that our readers have different backgrounds, e.g. general practice, nursing, maternity care, occupational therapy, physical therapy and health sciences. This series is not a straightforward 'cookbook' but a source to consult when engaging in qualitative research. We neither explain all the details nor deliver an emergency kit to solve the sort of problems that all qualitative researchers encounter at least once in their lifetimes, such as failing audio recorders. We do focus on topics that have evoked a lot of questions and worries among novice researchers; the so-called frequently asked questions (FAQs). We aim to provide researchers with practical guidance for doing qualitative research. For the journal's editorial policy, it will serve as a standard for qualitative research papers. For those who are not involved
in qualitative research on a daily basis, this series might be used as an introduction to understanding what high-quality qualitative research entails. This way, the series will also provide readers, reviewers and editors with references to criteria and tools for judging the quality of papers reporting on qualitative research.
FURTHER EDUCATION AND READING

As in quantitative research, qualitative research requires excellent methodology. Therefore, researchers in primary care need to be sufficiently trained in this type of research [2]. We hope that this series will function as a stepping stone towards participation in relevant national and international qualitative research courses or networks and will stimulate reading books and articles on qualitative research. During our supervisory work, researchers have mentioned examples of books on qualitative research that helped them in striving to perform outstanding qualitative research in primary care. Box 2 presents a selection of these books and the BMJ 2008 series on qualitative research for further reading.

Box 2. Examples of publications on qualitative research.
Brinkmann S, Kvale S. Interviews. Learning the craft of qualitative research interviewing. 3rd ed. Sage: London; 2014.
Bourgeault I, Dingwall R, de Vries R. The SAGE handbook of qualitative methods in health research. 1st ed. Sage: London; 2010. (a)
Creswell JW. Qualitative research design. Choosing among five approaches. 3rd ed. Sage: Los Angeles (CA); 2013. (a)
Denzin NK, Lincoln YS. The SAGE handbook of qualitative research. 4th ed. Sage: London; 2011.
Gray DE. Doing research in the real world. 3rd ed. Sage: London; 2013.
Holloway I, Wheeler S. Qualitative research in nursing and healthcare. 3rd ed. Wiley-Blackwell: Chichester; 2010.
Miles MB, Huberman AM, Saldana J. Qualitative data analysis. A methods sourcebook. 3rd ed. Sage: Los Angeles (CA); 2014. (a)
Morgan DL, Krueger RA. Focus group kit. Volumes 1–6. Sage: London; 1997.
Polit DF, Beck CT. Nursing research: Generating and assessing evidence for nursing practice. 10th ed. Lippincott, Williams & Wilkins: Philadelphia (PA); 2017.
Pope C, Van Royen P, Baker R. Qualitative methods in research on healthcare quality. Qual Saf Health Care 2002;11:148–152.
Salmons J. Qualitative online interviews. 2nd ed. Sage: London; 2015.
Silverman D. Doing qualitative research. 4th ed. Sage: London; 2013.
Starks H, Trinidad SB. Choose your method: A comparison of phenomenology, discourse analysis and grounded theory. Qual Health Res 2007;17:1372–1380.
Tracy SJ. Qualitative quality: Eight 'big-tent' criteria for excellent qualitative research. Qual Inq 2010;16(10):837–851.

BMJ series on qualitative research, published online 7 August 2008:
Kuper A, Reeves S, Levinson W. An introduction to reading and appraising qualitative research. BMJ 2008;337:a288.
Reeves S, Albert M, Kuper A, Hodges BD. Why use theories in qualitative research? BMJ 2008;337:a949.
Hodges BD, Kuper A, Reeves S. Discourse analysis. BMJ 2008;337:a879.
Kuper A, Lingard L, Levinson W. Critically appraising qualitative research. BMJ 2008;337:a1035.
Reeves S, Kuper A, Hodges BD. Qualitative research methodologies: ethnography. BMJ 2008;337:a1020.
Lingard L, Albert M, Levinson W. Grounded theory, mixed methods, and action research. BMJ 2008;337:a567.

(a) For advanced learning.
ACKNOWLEDGEMENTS

The authors wish to thank the following junior researchers who have been participating for the last few years in the so-called 'think tank on qualitative research' project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the 'novice perspective'.
REFERENCES
1. Polit DF, Beck CT. Nursing research: generating and assessing evidence for nursing practice. 10th ed. Philadelphia (PA): Lippincott, Williams & Wilkins; 2017.
2. Hepworth J, Key M. General practitioners learning qualitative research: a case study of postgraduate education. Aust Fam Physician 2015;44:760–763.
3. Greenhalgh T, Annandale E, Ashcroft R, et al. An open letter to the BMJ editors on qualitative research. BMJ 2016;352:i563.
Chapter 2

SERIES: PRACTICAL GUIDANCE TO QUALITATIVE RESEARCH. PART 2: CONTEXT, RESEARCH QUESTIONS AND DESIGNS
Irene Korstjens (a) and Albine Moser (b,c)

(a) Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands
(b) Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands
(c) Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands
ABSTRACT

In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. This second article addresses FAQs about context, research questions and designs. Qualitative research takes into account the natural contexts in which individuals or groups function to provide an in-depth understanding of real-world problems. The research questions are generally broad and open to unexpected findings. The choice of a qualitative design primarily depends on the nature of the research problem, the research question(s) and the scientific knowledge one seeks. Ethnography, phenomenology and grounded theory are considered to represent the 'big three' qualitative approaches. Theory guides the researcher through the research process by providing a 'lens' to look at the phenomenon under study. Since qualitative researchers and the participants of their studies interact in a social process, researchers influence the research process. The first article described the key features of qualitative research, the third article will focus on sampling, data collection and analysis, while the last article focuses on trustworthiness and publishing.

Keywords: General practice/family medicine, general qualitative designs and methods

Citation (APA): Korstjens, I., & Moser, A. (2017). Series: Practical guidance to qualitative research. Part 2: Context, research questions and designs. European Journal of General Practice, 23(1), 274–279. (7 pages)
Copyright: © This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).

Key points on context, research questions and designs
• Research questions are generally broad and open to unexpected findings, and depending on the research process might change to some extent.
• The SPIDER tool is more suited than PICO for searching for qualitative studies in the literature, and can support the process of formulating research questions for original studies.
• The choice of a qualitative design primarily depends on the nature of the research problem, the research question, and the scientific knowledge one seeks.
• Theory guides the researcher through the research process by providing a 'lens' to look at the phenomenon under study.
• Since qualitative researchers and the participants interact in a social process, the researcher influences the research process.
INTRODUCTION

In an introductory paper [1], we have described the key features of qualitative research. The current article addresses frequently asked questions about context, research questions and design of qualitative research.
CONTEXT

Why is context important?

Qualitative research takes into account the natural contexts in which individuals or groups function, as its aim is to provide an in-depth understanding of real-world problems [2]. In contrast to quantitative research, generalizability is not a guiding principle. According to most qualitative researchers, the 'reality' we perceive is constructed by our social, cultural, historical and individual contexts. Therefore, you look for variety in people to describe, explore or explain phenomena in real-world contexts. Influence from the researcher on the context is inevitable. However, by striving to minimalize your interfering with people's natural settings, you can get a 'behind the scenes' picture of how people feel or what other forces are at work, which may not be discovered in a quantitative investigation. Understanding what practitioners and patients think, feel or do in their natural context can make clinical practice and evidence-based interventions more effective, efficient, equitable and humane. For example, despite their awareness of widespread family violence, general practitioners (GPs) seem to be hesitant to ask about intimate partner violence. By applying a qualitative research approach, you might explore how and why practitioners act this way. You need to understand their context to be able to interact effectively with them, to analyse the data, and report your findings. You might consider the characteristics of practitioners and patients, such as their age, marital status, education, health condition, physical environment or social circumstances, and how and where you conduct your observations, interviews and group discussions. By giving your readers a 'thick description' of the participants' contexts you render their behaviour, experiences, perceptions and feelings meaningful. Moreover, you enable your readers to consider whether and how the findings of your study can be transferred to their contexts.
RESEARCH QUESTIONS

Why should the research question be broad and open?

To enable a thorough in-depth description, exploration or explanation of the phenomenon under study, in general, research questions need to be broad and open to unexpected findings. Within more in-depth research, for example, during building theory in a grounded theory design, the research question might be more focused. Where quantitative research asks: 'how many, how much, and how often?' qualitative research would ask: 'what?' and even more 'how, and why?' Depending on the research process, you might feel a need for fine-tuning or additional questions. This is common in qualitative research as it works with 'emerging design,' which means that it is not possible to plan the research in detail at the start, as the researchers have to be responsive to what they find as the research proceeds. This flexibility within the design is seen as a strength in qualitative research but only within an overall coherent methodology.
What kind of literature would I search for when preparing a qualitative study?

You would search for literature that can provide you with insights into the current state of knowledge and the knowledge gap that your study might address (Box 1). You might look for original quantitative, mixed-method and qualitative studies, or reviews such as quantitative meta-analyses or qualitative meta-syntheses. These findings would give you a picture of the empirical knowledge gap and the qualitative research questions that might lead to relevant and new insights and useful theories, models or concepts for studying your topic. When little knowledge is available, a qualitative study can be a useful starting point for subsequent studies. If in preparing your qualitative study, you cannot find sufficient literature about your topic, you might turn to proxy literature to explore the landscape around your topic. For example, when you are one of the very first researchers to study shared decision-making or health literacy in maternity care for disadvantaged parents-to-be, you might search for existing literature on these topics in other healthcare settings, such as general practice.
Box 1. Searching the literature for qualitative studies: the SPIDER tool. Based on Cooke et al. [3].
S (Sample): qualitative research uses smaller samples, as findings are not intended to be generalized to the general population.
PI (Phenomenon of Interest): qualitative research examines how and why certain experiences, behaviours and decisions occur (in contrast to the effectiveness of intervention).
D (Design): refers to the theoretical framework and the corresponding method used, which influence the robustness of the analysis and findings.
E (Evaluation): evaluation outcomes may include more subjective outcomes (views, attitudes, perspectives, experiences, etc.).
R (Research type): qualitative, quantitative and mixed-methods research could be searched for.
Why do qualitative researchers prefer SPIDER to PICO?

The SPIDER tool (sample, phenomenon of interest, design, evaluation, research type) (Box 1) is one of the available tools for qualitative literature searches [3]. It has been specifically developed for qualitative evidence synthesis, making it more suitable than PICO (population, intervention, comparison, outcome) in searching for qualitative studies that focus on understanding real-life experiences and processes of a variety of participants. PICO is primarily a tool for collecting evidence from published quantitative research on prognoses, diagnoses and therapies. Quantitative studies mostly use larger samples, comparing intervention and control groups, focusing on quantification of predefined outcomes at group level that can be generalized to larger populations. In contrast, qualitative research studies smaller samples in greater depth; it strives to minimalize manipulating their natural settings and is open to rich and unexpected findings. To suit this approach, the SPIDER tool was developed by adapting the PICO tool. Although these tools are meant for searching the literature, they can also be helpful in formulating research questions for original studies. Using SPIDER might support you in formulating a broad and open qualitative research question. An example of a SPIDER-type question for a qualitative study using interviews is: 'What are young parents' experiences of attending antenatal education?' The abstract and introduction of a manuscript might contain this broad and open research question, after which the methods section provides further operationalization of the elements of the SPIDER tool, such as (S) young mothers and fathers, aged 17–27 years, 1–12 months after childbirth, low to high educational levels, in urban or semi-urban regions; (PI) experiences of antenatal education in group sessions during pregnancy
guided by maternity care professionals; (D) phenomenology, interviews; (E) perceived benefits and costs, psychosocial and peer support received, changes in attitude, expectations, and perceived skills regarding healthy lifestyle, childbirth, parenthood, etc.; and (R) qualitative.
Is it normal that my research question seems to change during the study?

During the research process, the research question might change to a certain degree because data collection and analysis sharpens the researcher's lenses. This might lead to a somewhat different focus of your research question and to additional questions. However, you cannot radically change your research question because that would mean you were performing a different study. In the methods section, you need to describe how and explain why the original research question was changed. For example, let us return to the problem that GPs are hesitant to ask about intimate partner violence despite their awareness of family violence. To design a qualitative study, you might use SPIDER to support you in formulating your research question. You purposefully sample GPs, varying in age, gender, years of experience and type of practice (S-1). You might also decide to sample patients, in a variety of life situations, who have been faced with the problem (S-2). You clarify the phenomenon of family violence, which might be broadly defined when you design your study—e.g. family abuse and violence (PI-1). However, as your study evolves you might feel the need for fine-tuning—e.g. asking about intimate partner violence (PI-2). You describe the design, for instance, a phenomenological study using interviews (D), as well as the 'think, feel or do' elements you want to evaluate in your qualitative research. Depending on what is already known and the aim of your research, you might choose to describe actual behaviour and experiences (E-1) or explore attitudes and perspectives (E-2). Then, as your study progresses, you also might want to explain communication and follow-up processes (E-3) in your qualitative research (R). Each of your choices will be a trade-off between the intended variety,
depth and richness of your findings and the required samples, methods, techniques and efforts for data collection and analyses. These choices lead to different research questions, for example:
• 'What are GPs' and patients' attitudes and perspectives towards discussing family abuse and violence?'
• 'How do GPs behave during the communication and follow-up process when a patient's signals suggest intimate partner violence?'
DESIGNING QUALITATIVE STUDIES

How do I choose a qualitative design?

As in quantitative research, you base the choice of a qualitative design primarily on the nature of the research problem, the research question and the scientific knowledge you seek. Therefore, instead of simply choosing what seems easy or interesting, it is wiser to first consider and discuss with other qualitative researchers the pros and cons of different designs for your study. Then, depending on your skills, knowledge and understanding of qualitative methodology and of your research topic, you might seek training or support from other qualitative researchers. Finally, just as in quantitative research, the resources and time available and your access to the study settings and participants also influence the choices you make in designing the study.
What are the most important qualitative designs?

Ethnography [4], phenomenology [5], and grounded theory [6] are considered the 'big three' qualitative approaches [7] (Box 2). Box 2 shows that they stem from different theoretical disciplines and are used in various domains focusing on different areas of inquiry. Furthermore, qualitative research has a rich tradition of various designs [2]. Box 3 presents other qualitative approaches such as case studies [8], conversation analysis [9], narrative research [10], hermeneutic research [11], historical research [12], participatory action research [13], participatory community research [14], and research based on critical social theory [15], for example, feminist research or empowerment evaluation [16]. Some researchers do not mention a specific qualitative approach or research tradition but use a descriptive generic approach [17], or say that they used thematic analysis or content
analysis, an analysis of themes and patterns that emerge in the narrative content from a qualitative study [2]. This form of data analysis will be addressed in Part 3 of our series.

Box 2. The 'big three' approaches in qualitative study design. Based on Polit and Beck [2].

Ethnography
Definition: A branch of human enquiry, associated with anthropology, that focuses on the culture of a group of people, with an effort to understand the world view of those under study.
Discipline: Anthropology
Domain: Culture
Area of inquiry: Holistic view of a culture.
Focus: Understanding the meanings and behaviours associated with the membership of groups, teams, etc.

Phenomenology
Definition: A qualitative research tradition, with roots in philosophy and psychology, that focuses on the lived experience of humans.
Discipline: Psychology, philosophy
Domain: Lived experience
Area of inquiry: Experiences of individuals within their experiential world or 'life-world'.
Focus: Exploring how individuals make sense of the world to provide insightful accounts of their subjective experience.

Grounded theory
Definition: A qualitative research methodology with roots in sociology that aims to develop theories grounded in real-world observations.
Discipline: Sociology
Domain: Social settings
Area of inquiry: Social structural process within a social setting.
Focus: Building theories about social phenomena.
Box 3. Definitions of other qualitative research approaches. Based on Polit and Beck [2].

Case study: A research method involving a thorough, in-depth analysis of an individual, group or other social unit.

Conversation analysis: A form of discourse analysis, a qualitative tradition from the discipline of sociolinguistics, that seeks to understand the rules, mechanisms, and structure of conversations.

Critical social theory: An approach to viewing the world that involves a critique of society, with the goal of envisioning new possibilities and effecting social change.

Feminist research: Research that seeks to understand, typically through qualitative approaches, how gender and a gendered social order shape women's lives and their consciousness.
Hermeneutics: A qualitative research tradition, drawing on interpretative phenomenology, that focuses on the lived experience of humans and how they interpret those experiences.

Historical research: Systematic studies designed to discover facts and relationships about past events.

Narrative research: A narrative approach that focuses on the story as the object of the inquiry.

Participatory action research: A collaborative research approach between researchers and participants, based on the premise that the production of knowledge can be political and used to exert power.

Community-based participatory research: A research approach that enlists those who are most affected by a community issue (typically in collaboration or partnership with others who have research skills) to conduct research on and analyse that issue, with the goal of resolving it.

Content analysis: The process of organizing and integrating material from documents, often narrative information from a qualitative study, according to key concepts and themes.
Depending on your research question, you might choose one of the 'big three' designs

Let us assume that you want to study the caring relationship in palliative care in a primary care setting for people with COPD. If you are interested in the care provided by family caregivers from different ethnic backgrounds, you will want to investigate their experiences. Your research question might be 'What constitutes the caring relationship between GPs and family caregivers in the palliative care for people with COPD among family caregivers of Moroccan, Syrian, and Iranian ethnicity?' Since you are interested in the caring relationship within cultural groups or subgroups, you might choose ethnography. Ethnography is the study of culture within a society, focusing on one or more groups. Data are collected mostly through observations, informal (ethnographic) conversations, interviews and/or artefacts. The findings are reported in a lengthy monograph in which concepts and patterns are presented in a holistic way using context-rich description.

If you are interested in the experiential world or 'life-world' of the family caregivers and the impact of caregiving on their own lives, your research question might be 'What is the lived experience of being a family caregiver for a family member with COPD whose end is near?' In such a case, you might choose phenomenology, in which data are collected through in-depth interviews. The findings are presented in detailed descriptions of participants' experiences, grouped in themes.
If you want to study the interaction between GPs and family caregivers to generate a theory of 'trust' within caring relationships, your research question might be 'How does a relationship of trust between GPs and family caregivers evolve in end-of-life care for people with COPD?' Grounded theory might then be the design of first choice. In this approach, data are collected mostly through in-depth interviews, but may also include observations of encounters, followed by interviews with those who were observed. The findings consist of a theory, including a basic social process and relevant concepts and categories.

If you merely aim to give a qualitative description of the views of family caregivers about facilitators and barriers to contacting GPs, you might use content analysis and present the themes and subthemes you found.
What is the role of theory in qualitative research?

The role of theory is to guide you through the research process. Theory supports formulating the research question, guides data collection and analysis, and offers possible explanations of underlying causes of or influences on phenomena. From the start of your research, theory provides you with a 'lens' to look at the phenomenon under study. During your study, this 'theoretical lens' helps to focus your attention on specific aspects of the data and provides you with a conceptual model or framework for analysing them. It supports you in moving beyond the individual 'stories' of the participants. This leads to a broader understanding of the phenomenon under study and a wider applicability and transferability of the findings, which might help you formulate new theory, or advance a model or framework. Note that research does not always need to be theory-based; a descriptive study might, for example, simply interview people about perceived facilitators of and barriers to adopting new behaviour.
What is my role as a researcher?

As a qualitative researcher, you influence the research process. Qualitative researchers and the study participants always interact in a social process. You build a relationship during data collection, for the short term in an interview, or for the long term during observations or longitudinal studies. This influences the research process and its findings, which is why your report needs to be transparent about your perspective and explicitly acknowledge your subjectivity. Your role as a qualitative researcher requires empathy as well as distance. By empathy, we mean that you can put yourself into the
participants’ situation. Empathy is needed to establish a trusting relationship but might also bring about emotional distress. By distance, we mean that you need to be aware of your values, which influence your data collection, and that you have to be non-judgemental and non-directive. There is always a power difference between the researcher and participants. Especially, feminist researchers acknowledge that the research is done by, for, and about women and the focus is on gender domination and discrimination. As a feminist researcher, you would try to establish a trustworthy and non-exploitative relationship and place yourself within the study to avoid objectification. Feminist research is transformative to change oppressive structures for women [16].
What ethical issues do I need to consider?

Although qualitative researchers do not aim to intervene, their interaction with participants requires careful adherence to the statement of ethical principles for medical research involving human subjects as laid down in the Declaration of Helsinki [18]. It states that healthcare professionals involved in medical research are obliged to protect the life, health, dignity, integrity, right to self-determination, privacy and confidentiality of personal information of research subjects. The Declaration also requires that all vulnerable groups and individuals receive specifically considered protection. This is also relevant when working in contexts of low-income countries and poverty. Furthermore, researchers must consider the ethical, legal and regulatory norms and standards in their own countries, as well as applicable international norms and standards.

You might contact your local Medical Ethics Committee before setting up your study. In some countries, Medical Ethics Committees do not review qualitative research [2]. In that case, you will have to adhere to the Declaration of Helsinki [18], and you might seek approval from a research committee at your institution or from the board of your institution.

In qualitative research, you have to ensure anonymity by code numbering the tapes and transcripts and removing any identifying information from the transcripts. When you work with transcription offices, they will need to sign a confidentiality agreement. Even though the quotes from participants in your manuscripts are anonymized, you cannot always guarantee full confidentiality. Therefore, you might ask participants for specific permission to use these quotes in scientific publications.
The next article in this Series on qualitative research, Part 3, will focus on sampling, data collection, and analysis [19]. In the final article, Part 4, we address two overarching themes: trustworthiness and publishing [20].
ACKNOWLEDGEMENTS

The authors thank the following junior researchers who have been participating for the last few years in the so-called 'Think tank on qualitative research' project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the 'novice perspective'.
REFERENCES
1. Moser A, Korstjens I. Series: Practical guidance to qualitative research. Part 1: Introduction. Eur J Gen Pract. 2017;23:271–273.
2. Polit DF, Beck CT. Nursing research: Generating and assessing evidence for nursing practice. 10th ed. Philadelphia (PA): Lippincott, Williams & Wilkins; 2017.
3. Cooke A, Smith D, Booth A. Beyond PICO: the SPIDER tool for qualitative evidence synthesis. Qual Health Res. 2012;22:1435–1443.
4. Atkinson P, Coffey A, Delamont S, et al. Handbook of ethnography. Thousand Oaks (CA): Sage; 2001.
5. Smith JA, Flowers P, Larkin M. Interpretative phenomenological analysis. Theory, method and research. London (UK): Sage; 2010.
6. Charmaz K. Constructing grounded theory. 2nd ed. Thousand Oaks (CA): Sage; 2014.
7. Creswell JW. Qualitative research design. Choosing among five approaches. 3rd ed. Los Angeles (CA): Sage; 2013.
8. Yin R. Case study research: design and methods. 5th ed. Thousand Oaks (CA): Sage; 2014.
9. Ten Have P. Doing conversation analysis. 2nd ed. London (UK): Sage; 2007.
10. Riessman CK. Narrative methods for the human sciences. Thousand Oaks (CA): Sage; 2008.
11. Fleming V, Gaidys U, Robb Y. Hermeneutic research in nursing: developing a Gadamerian-based research method. Nurs Inq. 2003;10:113–120.
12. Lundy KS. Historical research. In: Munhall PL, editor. Nursing research: a qualitative perspective. 5th ed. Sudbury (MA): Jones & Bartlett; 2012. p. 381–398.
13. Koch T, Kralik D. Participatory action research in health care. Oxford (UK): Blackwell; 2006.
14. Minkler M, Wallerstein N, editors. Community-based participatory research for health. San Francisco (CA): Jossey-Bass Publishers; 2003.
15. Dant T. Critical social theory. Culture, society and critique. London (UK): Sage; 2004.
16. Hesse-Biber S, editor. Feminist research practice: a primer. Thousand Oaks (CA): Sage; 2014.
17. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2010;23:334–340.
18. World Medical Association. Declaration of Helsinki. Ethical principles for medical research involving human subjects. 2013. [Internet]; [cited 2017 Aug 9]. Available from: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
19. Moser A, Korstjens I. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. Eur J Gen Pract. 2018;24. DOI: 10.1080/13814788.2017.1375091.
20. Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing. Eur J Gen Pract. 2018;24. DOI: 10.1080/13814788.2017.1375092.
Chapter 3

SERIES: PRACTICAL GUIDANCE TO QUALITATIVE RESEARCH. PART 3: SAMPLING, DATA COLLECTION AND ANALYSIS
Albine Moser (a,b) and Irene Korstjens (c)

a Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands
b Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands
c Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands
ABSTRACT

In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.

Citation (APA): Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24(1), 9-18. (11 pages)

Copyright: © 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).

Keywords: General practice/family medicine, general qualitative designs and methods, sampling, data collection, analysis

Key points on sampling, data collection and analysis
• The data collection plan needs to be broadly defined and open during data collection.
• Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used.
• Data saturation determines sample size and is different for each study.
• The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions.
• Analyses of ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory or a descriptive summary, respectively.
INTRODUCTION

This article is the third paper in a series of four articles aiming to provide practical guidance to qualitative research. In an introductory paper, we have described the objective, nature and outline of the Series [1]. Part 2 of the series focused on context, research questions and design of qualitative research [2]. In this paper, Part 3, we address frequently asked questions (FAQs) about sampling, data collection and analysis.
SAMPLING

What is a sampling plan?

A sampling plan is a formal plan specifying a sampling method, a sample size, and a procedure for recruiting participants (Box 1) [3]. A qualitative sampling plan describes how many observations, interviews, focus group discussions or cases are needed to ensure that the findings will yield rich data. In quantitative studies, the sampling plan, including the sample size, is determined in detail beforehand, whereas qualitative research projects start with a broadly defined sampling plan. This plan enables you to include a variety of settings and situations and a variety of participants, including negative or extreme cases, to obtain rich data.

The key features of a qualitative sampling plan are as follows. First, participants are always sampled deliberately. Second, sample size differs for each study and is small. Third, the sample will emerge during the study: based on further questions raised in the process of data collection and analysis, inclusion and exclusion criteria might be altered, or the sampling sites might be changed. Finally, the sample is determined by conceptual requirements and not primarily by representativeness. You, therefore, need to provide a description of and rationale for your choices in the sampling plan. The sampling plan is appropriate when the selected participants and settings are sufficient to provide the information needed for a full understanding of the phenomenon under study.
Box 1. Sampling strategies in qualitative research. Based on Polit & Beck [3].

Purposive sampling: Selection of participants based on the researchers' judgement about which potential participants will be most informative.

Criterion sampling: Selection of participants who meet pre-determined criteria of importance.

Theoretical sampling: Selection of participants based on the emerging findings to ensure adequate representation of theoretical concepts.

Convenience sampling: Selection of participants who are easily available.

Snowball sampling: Selection of participants through referrals by previously selected participants or persons who have access to potential participants.

Maximum variation sampling: Selection of participants based on a wide range of variation in backgrounds.

Extreme case sampling: Purposeful selection of the most unusual cases.

Typical case sampling: Selection of the most typical or average participants.

Confirming and disconfirming sampling: Selection of cases that support checking or challenging emerging trends or patterns in the data.
Some practicalities: a critical first step is to select settings and situations where you have access to potential participants. Subsequently, the best strategy is to recruit participants who can provide the richest information. Such participants are knowledgeable about the phenomenon, can articulate and reflect on it, and are motivated to communicate at length and in depth with you. Finally, you should review the sampling plan regularly and adapt it when necessary.
What sampling strategies can I use?

Sampling is the process of selecting or searching for situations, contexts and/or participants who provide rich data on the phenomenon of interest [3]. In qualitative research, you sample deliberately, not at random. The most commonly used deliberate sampling strategies are purposive sampling, criterion sampling, theoretical sampling, convenience sampling and snowball sampling. Occasionally, the 'maximum variation', 'typical case' and 'confirming and disconfirming' sampling strategies are used.

Key informants need to be carefully chosen. Key informants hold special and expert knowledge about the phenomenon to be studied and are willing to share information and insights with you as the researcher [3]. They also help to gain access to participants, especially when groups are studied. In addition, as a researcher, you can validate your ideas and perceptions against those of the key informants.
What is the connection between sampling types and qualitative designs?

The 'big three' approaches of ethnography, phenomenology, and grounded theory use different types of sampling. In ethnography, the main strategy is purposive sampling of a variety of key informants, who are most knowledgeable about a culture and are able and willing to act as representatives in revealing and interpreting the culture. For example, an ethnographic study on the cultural influences on communication in maternity care will recruit key informants from among a variety of parents-to-be, midwives and obstetricians in midwifery care practices and hospitals.

Phenomenology uses criterion sampling, in which participants meet predefined criteria. The most prominent criterion is the participant's experience with the phenomenon under study. The researchers look for participants who have shared an experience, but vary in characteristics and in their individual experiences. For example, a phenomenological study on the lived experiences of pregnant women with psychosocial support from primary care midwives will recruit pregnant women varying in age, parity and educational level in primary midwifery practices.

Grounded theory usually starts with purposive sampling and later uses theoretical sampling to select participants who can best contribute to the developing theory. As theory construction takes place concurrently with data collection and analysis, the theoretical sampling of new participants also occurs along with the emerging theoretical concepts. For example, one grounded theory study tested several theoretical constructs to build a theory on autonomy in diabetes patients [4]. In developing the theory, the researchers started by purposefully sampling participants with diabetes differing in age, onset of diabetes and social roles, for example, employees, housewives, and retired people. After the first analysis, the researchers continued with theoretical sampling, for example, of participants who differed in the treatment they received, with different degrees of care dependency, and participants who received care from a general practitioner (GP), at a hospital or from a specialist nurse, etc.

In addition to the 'big three' approaches, content analysis is frequently applied in primary care research, and very often uses purposive, convenience, or snowball sampling. For instance, a study on people's choice of a hospital for elective orthopaedic surgery used snowball sampling [5]. One elderly person in the private network of one researcher personally approached
potential respondents in her social network by means of personal invitations (including letters). In turn, respondents were asked to pass on the invitation to other eligible candidates.

Sampling also depends on the characteristics of the setting, e.g., access, time, vulnerability of participants, and the different types of stakeholders. The setting where sampling is carried out is described in detail to provide a thick description of the context, thereby enabling the reader to make a transferability judgement (see Part 4: transferability). Sampling also affects the data analysis, during which you continue decision-making about whom or what situations to sample next. This is based on what you consider to be still missing to obtain the information needed for rich findings (see Part 1: emergent design). Another point of attention is the sampling of 'invisible groups' or vulnerable people. Sampling these participants requires applying multiple sampling strategies and calculating more time for sampling and recruitment in the project planning stage [6].
How do sample size and data saturation interact?

A guiding principle in qualitative research is to sample only until data saturation has been achieved. Data saturation means the collection of qualitative data to the point where a sense of closure is attained because new data yield redundant information [3]. Data saturation is reached when no new analytical information arises and the study provides maximum information on the phenomenon. In quantitative research, by contrast, the sample size is determined by a power calculation. The usually small sample size in qualitative research depends on the information richness of the data, the variety of participants (or other units), the broadness of the research question and the phenomenon, the data collection method (e.g., individual or group interviews) and the type of sampling strategy.

Mostly, you and your research team will jointly decide when data saturation has been reached, and hence whether the sampling can be ended and the sample size is sufficient. The most important criterion is the availability of enough in-depth data showing the patterns, categories and variety of the phenomenon under study. You review the analysis, the findings, and the quality of the participant quotes you have collected, and then decide whether sampling might be ended because of data saturation. In many cases, you will choose to carry out two or three more observations or interviews, or an additional focus group discussion, to confirm that data saturation has been reached.
When designing a qualitative sampling plan, we (the authors) work with estimates. We estimate that ethnographic research requires 25–50 interviews and observations, including about four to six focus group discussions, while phenomenological studies require fewer than 10 interviews, grounded theory studies 20–30 interviews, and content analysis 15–20 interviews or three to four focus group discussions. However, these numbers are very tentative and should be considered very carefully before being used. Furthermore, qualitative designs do not always mean small sample numbers. Bigger sample sizes might occur, for example, in content analysis, when employing rapid qualitative approaches, and in large or longitudinal qualitative studies.
DATA COLLECTION

What methods of data collection are appropriate?

The most frequently used data collection methods are participant observation, interviews, and focus group discussions. Participant observation is a method of data collection through the participation in and observation of a group or individuals over an extended period of time [3]. Interviews are another data collection method, in which an interviewer asks the respondents questions [6] face-to-face, by telephone or online. The qualitative research interview seeks to describe the meanings of central themes in the life world of the participants; the main task in interviewing is to understand the meaning of what participants say [5]. Focus group discussions are a data collection method in which a small group of people discusses a given topic, usually guided by a moderator using a questioning route [8]. It is common in qualitative research to combine more than one data collection method in one study.

You should always choose your data collection method wisely. Data collection in qualitative research is unstructured and flexible. You often make decisions on data collection while engaging in fieldwork, the guiding questions being with whom, what, when, where and how. The most basic or 'light' version of qualitative data collection is that of open questions in surveys. Box 2 provides an overview of the 'big three' qualitative approaches and their most commonly used data collection methods.
Box 2. Qualitative data collection methods.

Participant observation
Definition: Participation in and observation of people or groups.
Aim: To obtain a close and intimate familiarity with a given group of individuals and their practices through intensive involvement with people in their environment, usually over an extended period.
Suitability: ethnography, suitable; phenomenology, very rare; grounded theory, sometimes; content analysis, suitable.

Face-to-face in-depth interviews
Definition: A conversation in which the researcher poses questions and the participants provide answers, face-to-face, by telephone or via mail.
Aim: To elicit the participant's experiences, perceptions, thoughts and feelings.
Suitability: ethnography, suitable; phenomenology, suitable; grounded theory, suitable.

Focus group discussion
Definition: An interview with a group of participants to answer questions on a specific topic, face-to-face or via mail; the people who participate interact with each other.
Aim: To examine different experiences, perceptions, thoughts and feelings among various participants or parties.
Suitability: ethnography, suitable; phenomenology, sometimes; grounded theory, suitable.
What role should I adopt when conducting participant observations?

What is important is to immerse yourself in the research setting, to enable you to study it from the inside. There are four types of researcher involvement in observations, and in your qualitative study, you may apply all four. In the first type, as 'complete participant', you become part of the setting and
play an insider role, just as you do in your own work setting. This role might be appropriate when studying persons who are difficult to access. The second type is 'active participation': you have gained access to a particular setting and observe the group under study. You can move around at will and can observe in detail and depth and in different situations. The third role is 'moderate participation': you do not actually work in the setting you wish to study but are located there as a researcher. You might adopt this role when you are not affiliated to the care setting you wish to study. The fourth role is that of the 'complete observer', in which you merely observe (a bystander role) and do not participate in the setting at all.

However, you cannot perform any observations without access to the care setting. Such access might be easily obtained when you collect data by observation in your own primary care setting. In some cases, you might observe other care settings that are relevant to primary care, for instance observing the discharge procedure for vulnerable elderly people from hospital to primary care.
How do I perform observations?

It is important to decide what to focus on in each individual observation. The focus of observations matters because you can never observe everything, and you can only observe each situation once. Your focus might differ between observations. Each observation should provide you with answers regarding 'Who do you observe?', 'What do you observe?', 'Where does the observation take place?', 'When does it take place?', 'How does it happen?', and 'Why does it happen as it happens?'

Observations are not static but proceed in three stages: descriptive, focused, and selective. Descriptive means that you observe, on the basis of general questions, everything that goes on in the setting. Focused observation means that you observe certain situations for some time, with some areas becoming more prominent. Selective means that you observe highly specific issues only. For example, if you want to observe the discharge procedure for vulnerable elderly people from hospitals to general practice, you might begin with broad observations to get to know the general procedure. This might involve observing several different patient situations. You might find that the involvement of primary care nurses deserves special attention, so you might then focus on the roles of hospital staff and primary care nurses, and their interactions. Finally, you might want to observe only the specific situations where hospital staff and primary care nurses exchange information.

You take field notes from all these observations and add your own reflections on the situations you observed. You jot down words, whole sentences or parts of situations, and
your reflections on a piece of paper. After the observations, the field notes need to be written up and transcribed immediately, so that detailed descriptions can be included.

Box 3. Further reading on interviews and focus group discussions.

Box 4. Qualitative data analysis.
What are the general features of an interview?

Interviews involve interactions between the interviewer(s) and the respondent(s) based on interview questions. Individual, or face-to-face, interviews should be distinguished from focus group discussions. The interview questions are written down in an interview guide [7] for individual interviews or a questioning route [8] for focus group discussions, with questions focusing on the phenomenon under study. The questions are listed in a pre-determined sequence, but in individual interviews the actual order depends on the respondents and how the interviews unfold: during the interview, as the conversation evolves, you go back and forth through the sequence of questions. It should be a dialogue, not a strict question-and-answer interview. In a focus group discussion, the sequence is intended to facilitate the interaction between the participants, and you might adapt the sequence depending on how their discussion evolves. Working with an interview
guide or questioning route enables you to collect information on specific topics from all participants. You are in control in the sense that you give direction to the interview, while the participants are in control of their answers. However, you need to be open-minded enough to recognize that some topics relevant to participants may not have been covered by your interview guide or questioning route, and need to be added. During the data collection process, you develop the interview guide or questioning route further and revise it based on the analysis.

The interview guide and questioning route might include open and general as well as subordinate or detailed questions, probes and prompts. Probes are exploratory questions, for example, 'Can you tell me more about this?' or 'Then what happened?' Prompts are words and signs that encourage participants to tell more. Examples of stimulating prompts are eye contact, leaning forward and open body language.

Box 5. Further reading on qualitative analysis.

Ethnography
• Atkinson P, Coffey A, Delamont S, Lofland J, Lofland L. Handbook of ethnography. Thousand Oaks (CA): Sage; 2001.
• Spradley J. The ethnographic interview. New York (NY): Holt, Rinehart & Winston; 1979.
• Spradley J. Participant observation. New York (NY): Holt, Rinehart & Winston; 1980.

Phenomenology
• Colaizzi PF. Psychological research as the phenomenologist views it. In: Valle R, King M, editors. Existential-phenomenological alternatives for psychology. New York (NY): Oxford University Press; 1978. p. 41–78.
• Smith JA, Flowers P, Larkin M. Interpretative phenomenological analysis. Theory, method and research. London (UK): Sage; 2010.

Grounded theory
• Charmaz K. Constructing grounded theory. 2nd ed. Thousand Oaks (CA): Sage; 2014.
• Corbin J, Strauss A. Basics of qualitative research. Techniques and procedures for developing grounded theory. Los Angeles (CA): Sage; 2008.

Content analysis
• Elo S, Kääriäinen M, Kanste O, Pölkki T, Utriainen K, Kyngäs H. Qualitative content analysis: a focus on trustworthiness. SAGE Open. 2014:1–10. DOI: 10.1177/2158244014522633.
• Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62:107–115.
• Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–1288.
What is a face-to-face interview?

A face-to-face interview is an individual interview, that is, a conversation between participant and interviewer. Interviews can focus on past or present situations, and on personal issues. Most qualitative studies start with open interviews to get a broad 'picture' of what is going on. You should not provide a great deal of guidance, and you should avoid influencing the answers to fit 'your' point of view, as you want to obtain the participant's own experiences, perceptions, thoughts, and feelings. You should encourage the participants to speak freely. As the interview evolves, your subsequent major and subordinate questions become more focused. A face-to-face or individual interview might last between 30 and 90 minutes.

Most interviews are semi-structured [3]. To prepare an interview guide that helps ensure a set of topics is covered with every participant, you might use a framework for constructing a semi-structured interview guide [9]: (1) identify the prerequisites for using a semi-structured interview and evaluate whether it is the appropriate data collection method; (2) retrieve and use previous knowledge to gain a comprehensive and adequate understanding of the phenomenon under study; (3) formulate a preliminary interview guide by operationalizing the previous knowledge; (4) pilot-test the preliminary interview guide to confirm the coverage and relevance of the content and to identify any need to reformulate questions; (5) complete the interview guide so that it is clear and logical and supports the collection of rich data.

The first few minutes of an interview are decisive. The participant wants to feel at ease before sharing his or her experiences. In a semi-structured interview, you would start with open questions related to the topic, which invite the participant to talk freely. The questions aim to encourage participants to tell of their personal experiences, including feelings and emotions, and often focus on a particular experience or specific events. As you want to get as much detail as possible, you also ask follow-up questions or encourage the telling of more details by using probes and prompts or keeping a short period of silence [6]. You first ask what and why questions and then how questions.

You need to be prepared to handle problems you might encounter, such as gaining access, dealing with multiple formal and informal gatekeepers, negotiating space and privacy for recording data, socially desirable answers from participants, reluctance of participants to tell their story, deciding on
the appropriate role (emotional involvement), and exiting from fieldwork prematurely.
What is a focus group discussion and when can I use it?

A focus group discussion is a way of bringing people together to discuss a specific topic of interest. The people participating in the focus group discussion share certain characteristics, e.g., a professional background, or share similar experiences, e.g., having diabetes. You use their interaction to collect the information you need on a particular topic. The depth of information the discussion reaches depends on the extent to which focus group participants can stimulate each other in discussing and sharing their views and experiences. Focus group participants respond to you and to each other. Focus group discussions are often used to explore patients' experiences of their condition and interactions with health professionals, to evaluate programmes and treatment, to gain an understanding of health professionals' roles and identities, to examine the perception of professional education, or to obtain perspectives on primary care issues. A focus group discussion usually lasts 90–120 minutes.

You might use guidelines for developing a questioning route [8]: (1) brainstorm about possible topics you want to cover; (2) sequence the questioning: arrange general questions first and then more specific questions, and ask positive questions before negative questions; (3) phrase the questions: use open-ended questions, ask participants to think back and reflect on their personal experiences, avoid asking 'why' questions, keep questions simple, make your questions sound conversational, and be careful about giving examples; (4) estimate the time for each question, considering the complexity of the question, the category of the question, the level of participants' expertise, the size of the focus group, and the amount of discussion you want related to the question; (5) obtain feedback from others (peers); (6) revise the questions based on the feedback; and (7) test the questions by doing a mock focus group discussion. All questions need to help answer the research question about the phenomenon under study.

You need to be prepared to manage difficulties as they arise, for example, dominant participants during the discussion, little or no interaction and discussion between participants, participants who have difficulty sharing their real feelings about sensitive topics with others, and participants who behave differently when they are observed.
How should I compose a focus group and how many participants are needed?

The purpose of the focus group discussion determines its composition. Smaller groups might be more suitable for complex (and sometimes controversial) topics. Smaller focus groups also give the participants more time to voice their views and provide more detailed information, while participants in larger focus groups might generate a greater variety of information. In composing a smaller or larger focus group, you need to ensure that the participants are likely to have different viewpoints that stimulate the discussion. For example, if you want to discuss the management of obesity in a primary care district, you might want a group composed of professionals who work with these patients but who have a variety of backgrounds, e.g. GPs, community nurses, practice nurses in general practice, school nurses, midwives or dieticians. Focus groups generally consist of 6–12 participants.

Careful time management is important, since you have to determine how much time you want to devote to answering each question, and how much time is available for each individual participant. For example, if you have planned a focus group discussion lasting 90 minutes with eight participants, you might need 15 minutes for the introduction and the concluding summary. This leaves 75 minutes for asking questions, and if you have four questions, this allows a total of about 18 minutes of speaking time for each question. If all eight respondents participate in the discussion, this boils down to roughly two minutes of speaking time per respondent per question.
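As a rough illustration of this time budgeting, the sketch below works through the arithmetic of the hypothetical 90-minute session described above. The figures are assumptions taken from that example, not fixed rules, and the numbers in the text are rounded.

```python
# Rough sketch of focus group time budgeting, using the example figures above
# (assumed values for one hypothetical session, not fixed rules).
total_minutes = 90          # planned length of the focus group discussion
overhead_minutes = 15       # introduction and concluding summary
n_questions = 4             # questions in the questioning route
n_participants = 8          # focus group participants

discussion_minutes = total_minutes - overhead_minutes            # 75 minutes
minutes_per_question = discussion_minutes / n_questions          # ~18.8 minutes
minutes_per_participant = minutes_per_question / n_participants  # ~2.3 minutes

print(f"Discussion time: {discussion_minutes} min")
print(f"Per question: {minutes_per_question:.1f} min")
print(f"Per participant per question: {minutes_per_participant:.1f} min")
```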
How can I use new media to collect qualitative data?

New media are increasingly used for collecting qualitative data, for example, through online observations, online interviews and focus group discussions, and in the analysis of online sources. Data can be collected synchronously or asynchronously, with text messaging, video conferences, video calls or immersive virtual worlds or games, etcetera. Qualitative research is moving from 'virtual' to 'digital': virtual means those approaches that import traditional data collection methods into the online environment, while digital means those approaches that take advantage of the unique characteristics and capabilities of the Internet for research [10]. New media can also be applied. See Box 3 for further reading on interviews and focus group discussions.
ANALYSIS

Can I wait with my analysis until all data have been collected?

You cannot wait, because an iterative approach and an emergent design are at the heart of qualitative research. This involves a process whereby you move back and forth between sampling, data collection and data analysis to accumulate rich data and interesting findings. The principle is that what emerges from the data analysis will shape subsequent sampling decisions. Immediately after the very first observation, interview or focus group discussion, you have to start the analysis and prepare your field notes.
Why is a good transcript so important?

Transcripts of audiotaped interviews and focus group discussions, together with your field notes, constitute your major data sources. Transcripts are preferably made by trained and well-instructed transcribers. Usually, e.g., in ethnography, phenomenology, grounded theory, and content analysis, data are transcribed verbatim, which means that recordings are fully typed out and the transcripts are accurate and reflect the interview or focus group discussion experience. The most important aspects of transcribing are the focus on the participants' words, transcribing all parts of the audiotape, and carefully revisiting the tape and rereading the transcript. In conversation analysis, non-verbal actions such as coughing, the length of pauses, emphasis and tone of voice need to be described in detail using a formal transcription system (the best known being G. Jefferson's symbols). To facilitate analysis, it is essential that you ensure and check that transcripts are accurate and reflect the totality of the interview, including pauses, punctuation and non-verbal data.

To be able to make sense of qualitative data, you need to immerse yourself in the data and 'live' the data. In this process of incubation, you search the transcripts for meaning and essential patterns, and you try to arrive at legitimate and insightful findings. You familiarize yourself with the data by reading and rereading the transcripts carefully and conscientiously, in search of deeper understanding.

Are there differences between the analyses in ethnography, phenomenology, grounded theory, and content analysis?

Ethnography, phenomenology, and grounded theory each have different analytical approaches, and you should be aware that each of these
approaches has different schools of thought, which may also have integrated analytical methods from other schools (Box 4). When you opt for a particular approach, it is best to use a handbook describing its analytical methods, as it is better to use one approach consistently than to 'mix up' different schools.

In general, qualitative analysis begins with organizing the data. Large amounts of data need to be stored in smaller, manageable units that can be retrieved and reviewed easily. To obtain a sense of the whole, the analysis starts with reading and rereading the data, looking at themes, emotions and the unexpected, taking into account the overall picture. You immerse yourself in the data. The most widely used procedure is to develop an inductive coding scheme based on the actual data [11]. This is a process of open coding, creating categories and abstraction. In most cases, you do not start with a predefined coding scheme. You describe what is going on in the data, asking yourself: what is this? What does it stand for? What else is like this? What is this distinct from? Based on this close examination of what emerges from the data, you make as many labels as needed. Then, you make a coding sheet, in which you collect the labels and, based on your interpretation, cluster them into preliminary categories. The next step is to order similar or dissimilar categories into broader, higher-order categories. Each category is named using content-characteristic words. Then, you use abstraction by formulating a general description of the phenomenon under study: subcategories with similar events and information are grouped together as categories, and categories are grouped into main categories. During the analysis process, you identify 'missing analytical information' and continue data collection. You reread, recode, re-analyse and re-collect data until your findings provide breadth and depth.

Throughout the qualitative study, you reflect on what you see or do not see in the data. It is common to write 'analytic memos' [3], write-ups or mini-analyses about what you think you are learning during the course of your study, from designing to publishing. They can be a few sentences or several pages, whatever is needed to reflect upon: open codes, categories, concepts, and patterns that might be emerging in the data. Memos can contain summaries of major findings, and comments and reflections on particular aspects.

In ethnography, analysis begins from the moment the researcher sets foot in the field. The analysis involves continually looking for patterns in the behaviours and thoughts of the participants in everyday life, in order to obtain an understanding of the culture under study. When comparing
one pattern with another and analysing many patterns simultaneously, you may use maps, flow charts, organizational charts and matrices to illustrate the comparisons graphically. The outcome of an ethnographic study is a narrative description of a culture.

In phenomenology, analysis aims to describe and interpret the meaning of an experience, often by identifying essential subordinate and major themes. You search for common themes featuring within an interview and across interviews, sometimes involving the study participants or other experts in the analysis process. The outcome of a phenomenological study is a detailed description of themes that capture the essential meaning of a 'lived' experience.

Grounded theory generates a theory that explains how a basic social problem that emerged from the data is processed in a social setting. Grounded theory uses the 'constant comparison' method, which involves comparing elements present in one data source (e.g., an interview) with elements in another source, to identify commonalities. The steps in the analysis are known as open, axial and selective coding. Throughout the analysis, you document your ideas about the data in methodological and theoretical memos. The outcome of a grounded theory study is a theory.

Descriptive generic qualitative research is defined as research designed to produce a low-inference description of a phenomenon [12]. Although Sandelowski maintains that all research involves interpretation, she has also suggested that qualitative description attempts to minimize the inferences made in order to remain 'closer' to the original data [12]. Descriptive generic qualitative research often applies content analysis. Descriptive content analysis studies are not based on a specific qualitative tradition and are varied in their methods of analysis. The analysis of the content aims to identify themes, and patterns within and among these themes. An inductive content analysis [11] involves breaking down the data into smaller units, coding and naming the units according to the content they represent, and grouping the coded material based on shared concepts; the groups can be represented as clusters in tree-like diagrams. A deductive content analysis [11] uses a theory, theoretical framework or conceptual model to analyse the data by operationalizing them in a coding matrix. An inductive content analysis might use several techniques from grounded theory, such as open and axial coding and constant comparison. However, note that your findings are then merely a summary of categories, not a grounded theory.
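To make the coding and categorizing steps above more tangible, here is a minimal, purely hypothetical sketch of the bookkeeping involved: storing coded transcript fragments, clustering the codes into preliminary categories, and retrieving the quotes behind each code. The fragments, codes and categories are invented for illustration and are not taken from any real study; the analysis software discussed next automates exactly this sort of record-keeping.

```python
# Minimal, hypothetical sketch of qualitative coding bookkeeping:
# coded transcript fragments, preliminary categories, and retrieval by code.
# All fragments, codes and categories below are invented for illustration.
from collections import defaultdict

# Each coded segment: (transcript id, quote, assigned open code)
coded_segments = [
    ("interview_01", "I never know how to bring it up during a consultation.", "hesitation"),
    ("interview_02", "Asking directly felt intrusive, so I waited for a signal.", "hesitation"),
    ("interview_02", "The practice nurse followed up with the family afterwards.", "follow-up"),
]

# Preliminary, researcher-defined categories grouping the open codes
categories = {
    "barriers to discussing violence": ["hesitation"],
    "continuity of care": ["follow-up"],
}

# Index quotes by code so they can be retrieved per category
segments_by_code = defaultdict(list)
for source, quote, code in coded_segments:
    segments_by_code[code].append((source, quote))

for category, codes in categories.items():
    print(category)
    for code in codes:
        for source, quote in segments_by_code[code]:
            print(f"  [{code}] {source}: {quote}")
```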
Analysis software can support you in managing your data, for example by helping to store, annotate and retrieve texts, to locate words, phrases and segments of data, to name and label, to sort and organize, to identify data units, to prepare diagrams and to extract quotes. Still, as a researcher you do the analytical work yourself, by looking at what is in the data and making decisions about assigning codes and identifying categories, concepts and patterns. The computer assisted qualitative data analysis (CAQDAS) website provides support for making informed choices between analytical software packages and courses: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas/support/choosing. See Box 5 for further reading on qualitative analysis.

The next and final article in this series, Part 4, will focus on trustworthiness and publishing qualitative research [13].
ACKNOWLEDGEMENTS

The authors thank the following junior researchers who have been participating for the last few years in the so-called 'Think tank on qualitative research' project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the 'novice perspective'.
REFERENCES
1. Moser A, Korstjens I. Series: Practical guidance to qualitative research. Part 1: Introduction. Eur J Gen Pract. 2017;23:271–273.
2. Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 2: Context, research questions and designs. Eur J Gen Pract. 2017;23:274–279.
3. Polit DF, Beck CT. Nursing research: Generating and assessing evidence for nursing practice. 10th ed. Philadelphia (PA): Lippincott, Williams & Wilkins; 2017.
4. Moser A, van der Bruggen H, Widdershoven G. Competency in shaping one's life: Autonomy of people with type 2 diabetes mellitus in a nurse-led, shared-care setting; A qualitative study. Int J Nurs Stud. 2006;43:417–427.
5. Moser A, Korstjens I, van der Weijden T, et al. Patient's decision making in selecting a hospital for elective orthopaedic surgery. J Eval Clin Pract. 2010;16:1262–1268.
6. Bonevski B, Randell M, Paul C, et al. Reaching the hard-to-reach: a systematic review of strategies for improving health and medical research with socially disadvantaged groups. BMC Med Res Methodol. 2014;14:42.
7. Brinkmann S, Kvale S. Interviews. Learning the craft of qualitative research interviewing. 3rd ed. London (UK): Sage; 2014.
8. Kruger R, Casey M. Focus groups: A practical guide for applied research. Thousand Oaks (CA): Sage; 2015.
9. Kallio H, Pietilä AM, Johnson M, et al. Systematic methodological review: developing a framework for a qualitative semi-structured interview guide. J Adv Nurs. 2016;72:2954–2965.
10. Salmons J. Qualitative online interviews. 2nd ed. London (UK): Sage; 2015.
11. Elo S, Kyngäs A. The qualitative content analysis process. J Adv Nurs. 2008;62:107–115.
12. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2010;23:334–340.
13. Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing. Eur J Gen Pract. 2018;24. DOI: 10.1080/13814788.2017.1375092
Chapter 4

SERIES: PRACTICAL GUIDANCE TO QUALITATIVE RESEARCH. PART 4: TRUSTWORTHINESS AND PUBLISHING
Irene Korstjens (a) and Albine Moser (b,c)

(a) Faculty of Health Care, Research Centre for Midwifery Science, Zuyd University of Applied Sciences, Maastricht, The Netherlands;
(b) Faculty of Health Care, Research Centre Autonomy and Participation of Chronically Ill People, Zuyd University of Applied Sciences, Heerlen, The Netherlands;
(c) Faculty of Health, Medicine and Life Sciences, Department of Family Medicine, Maastricht University, Maastricht, The Netherlands
ABSTRACT
In the course of our supervisory work over the years we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By 'novice' we mean Master's students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The first article provides an introduction to this series. The second article focused on context, research questions and designs. The third article focused on sampling, data collection and analysis. This fourth article addresses FAQs about trustworthiness and publishing. Quality criteria for all qualitative research are credibility, transferability, dependability, and confirmability. Reflexivity is an integral part of ensuring the transparency and quality of qualitative research. Writing a qualitative research article reflects the iterative nature of the qualitative research process: data analysis continues while writing. A qualitative research article is mostly narrative and tends to be longer than a quantitative paper, and sometimes requires a different structure. Editors essentially use the criteria: is it new, is it true, is it relevant? An effective cover letter enhances confidence in the newness, trueness and relevance, and explains why your study required a qualitative design. It provides information about the way you applied quality criteria or a checklist, and you can attach the checklist to the manuscript.

Citation: (APA): Korstjens, I., & Moser, A. (2018). Series: Practical guidance to qualitative research. Part 4: Trustworthiness and publishing. European Journal of General Practice, 24(1), 120-124. (6 pages)

Copyright: © 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/).

Keywords: General practice/family medicine, general, qualitative designs and methods, trustworthiness, reflexivity, publishing

Key points on trustworthiness and publishing
• The quality criteria for all qualitative research are credibility, transferability, dependability, and confirmability.
• In addition, reflexivity is an integral part of ensuring the transparency and quality of qualitative research.
• Writing a qualitative article reflects the iterative nature of the qualitative research process: continuous data analysis continues with simultaneous fine-tuning.
• Editors essentially use the criteria: is it new, is it true, and is it relevant?
• An effective cover letter enhances confidence in the newness, trueness and relevance, and explains why your study required a qualitative design.
INTRODUCTION
This article is the fourth and last in a series of four articles aiming to provide practical guidance for qualitative research. In an introductory paper, we have described the objective, nature and outline of the series [1]. Part 2 of the series focused on context, research questions and design of qualitative research [2], whereas Part 3 concerned sampling, data collection and analysis [3]. In this paper, Part 4, we address frequently asked questions (FAQs) about two overarching themes: trustworthiness and publishing.
TRUSTWORTHINESS

What are the quality criteria for qualitative research?
The same quality criteria apply to all qualitative designs, including the 'big three' approaches. Quality criteria used in quantitative research, e.g. internal validity, generalizability, reliability, and objectivity, are not suitable to judge the quality of qualitative research. Qualitative researchers speak of trustworthiness, which simply poses the question: 'Can the findings be trusted?' [4]. Several definitions and criteria of trustworthiness exist (see Box 1) [2], but the best-known criteria are credibility, transferability, dependability, and confirmability as defined by Lincoln and Guba [4].

Box 1: Trustworthiness: definitions of quality criteria in qualitative research. Based on Lincoln and Guba [4].

Credibility: The confidence that can be placed in the truth of the research findings. Credibility establishes whether the research findings represent plausible information drawn from the participants' original data and are a correct interpretation of the participants' original views.

Transferability: The degree to which the results of qualitative research can be transferred to other contexts or settings with other respondents. The researcher facilitates the transferability judgment by a potential user through thick description.

Dependability: The stability of findings over time. Dependability involves participants' evaluation of the findings, interpretation and recommendations of the study such that all are supported by the data as received from participants of the study.

Confirmability: The degree to which the findings of the research study could be confirmed by other researchers. Confirmability is concerned with establishing that data and interpretations of the findings are not figments of the inquirer's imagination, but are clearly derived from the data.

Reflexivity: The process of critical self-reflection about oneself as researcher (own biases, preferences, preconceptions), and the research relationship (relationship to the respondent, and how the relationship affects participants' answers to questions).
What is credibility and what strategies can be used to ensure it?
Credibility is the equivalent of internal validity in quantitative research and is concerned with the aspect of truth-value [4]. Strategies to ensure credibility are prolonged engagement, persistent observation, triangulation and member check (Box 2). When you design your study, you also determine which of these strategies you will use, because not all strategies might be suitable. For example, a member check of written findings might not be possible for study participants with a low level of literacy. Let us give an example of the possible use of strategies to ensure credibility. A team of primary care researchers studied the process by which people with type 2 diabetes mellitus try to master diabetes self-management [6]. They used the grounded theory approach, and their main finding was an explanatory theory. The researchers ensured credibility by using the following strategies.

Box 2. Definition of strategies to ensure trustworthiness in qualitative research. Based on Lincoln and Guba [4]; Sim and Sharp [5].

Credibility
• Prolonged engagement: Lasting presence during observation of long interviews or long-lasting engagement in the field with participants. Investing sufficient time to become familiar with the setting and context, to test for misinformation, to build trust, and to get to know the data to get rich data.
• Persistent observation: Identifying those characteristics and elements that are most relevant to the problem or issue under study, on which you will focus in detail.
• Triangulation: Using different data sources, investigators and methods of data collection. Data triangulation refers to using multiple data sources in time (gathering data at different times of the day or at different times in a year), space (collecting data on the same phenomenon in multiple sites to test for cross-site consistency) and person (gathering data from different types or levels of people, e.g., individuals, their family members and clinicians). Investigator triangulation is concerned with using two or more researchers to make coding, analysis and interpretation decisions. Method triangulation means using multiple methods of data collection.
• Member check: Feeding back data, analytical categories, interpretations and conclusions to members of those groups from whom the data were originally obtained. It strengthens the data, especially because researcher and respondents look at the data with different eyes.

Transferability
• Thick description: Describing not just the behaviour and experiences, but their context as well, so that the behaviour and experiences become meaningful to an outsider.

Dependability and confirmability
• Audit trail: Transparently describing the research steps taken from the start of a research project to the development and reporting of the findings. The records of the research path are kept throughout the study.

Reflexivity
• Diary: Examining one's own conceptual lens, explicit and implicit assumptions, preconceptions and values, and how these affect research decisions in all phases of qualitative studies.
Prolonged engagement. Several distinct questions were asked regarding topics related to mastery. Participants were encouraged to support their statements with examples, and the interviewer asked follow-up questions. The researchers studied the data from their raw interview material until a theory emerged to provide them with the scope of the phenomenon under study.

Triangulation. Triangulation aims to enhance the process of qualitative research by using multiple approaches [7]. Methodological triangulation was used by gathering data by means of different data collection methods such as in-depth interviews, focus group discussions and field notes. Investigator triangulation was applied by involving several researchers as research team members, and involving them in addressing the organizational aspects of the study and the process of analysis. Data were analysed by two different researchers. The first six interviews were analysed by them independently, after which the interpretations were compared. If their interpretations differed, they discussed them until the most suitable interpretation was found, which best represented the meaning of the data. The two researchers held regular meetings during the process of analysis (after analysing every third data set). In addition, regular analytical sessions were held with the research team. Data triangulation was secured by using the various data sets that emerged throughout the analysis process: raw material, codes, concepts and theoretical saturation.

Persistent observation. Developing the codes, the concepts and the core category helped to examine the characteristics of the data. The researchers constantly read and reread the data, analysed them, theorized about them and revised the concepts accordingly. They recoded and relabelled codes, concepts and the core category. The researchers studied the data until the final theory provided the intended depth of insight.

Member check. All transcripts of the interviews and focus group discussions were sent to the participants for feedback. In addition, halfway through the study period, a meeting was held with those who had participated in either the interviews or the focus group discussions, enabling them to correct the interpretation and challenge what they perceived to be 'wrong' interpretations. Finally, the findings were presented to the participants in another meeting to confirm the theory.
What does transferability mean and who makes a 'transferability judgement'?
Transferability concerns the aspect of applicability [4]. Your responsibility as a researcher is to provide a 'thick description' of the participants and the research process, to enable the reader to assess whether your findings are transferable to their own setting; this is the so-called transferability judgement. This implies that the reader, not you, makes the transferability judgment because you do not know their specific settings. In the aforementioned study on self-management of diabetes, the researchers provided a rich account of descriptive data, such as the context in which the research was carried out, its setting, sample, sample size, sample strategy, demographic, socio-economic, and clinical characteristics, inclusion and exclusion criteria, interview procedure and topics, changes in interview questions based on the iterative research process, and excerpts from the interview guide.
What is the difference between dependability and confirmability and why is an audit trail needed?
Dependability includes the aspect of consistency [4]. You need to check whether the analysis process is in line with the accepted standards for a particular design. Confirmability concerns the aspect of neutrality [4]. You need to secure the inter-subjectivity of the data. The interpretation should not be based on your own particular preferences and viewpoints but needs to be grounded in the data. Here, the focus is on the interpretation process embedded in the process of analysis. The strategy needed to ensure dependability and confirmability is known as an audit trail. You are responsible for providing a complete set of notes on decisions made during the research process, research team meetings, reflective thoughts, sampling, research materials adopted, emergence of the findings and information about the data management. This enables the auditor to study the transparency of the research path. In the aforementioned study of diabetes self-management, a university-based auditor examined the analytical process, the records and the minutes of meetings for accuracy, and assessed whether all analytical techniques of the grounded theory methodology had been used accordingly. This auditor also reviewed the analysis, i.e. the descriptive, axial and selective codes, to see whether they followed from the data (raw data, analysis notes, coding notes, process notes, and report) and were grounded in the data. The auditor who performed the dependability and confirmability audit was not part of the research team but an expert in grounded theory. The audit report was shared with all members of the research team.
Why is reflexivity an important quality criterion?
As a qualitative researcher, you have to acknowledge the importance of being self-aware and reflexive about your own role in the process of collecting, analysing and interpreting the data, and about the preconceived assumptions you bring to your research [8]. Therefore, your interviews, observations, focus group discussions and all analytical data need to be supplemented with your reflexive notes. In the aforementioned study of diabetes self-management, the reflexive notes for an interview described the setting and aspects of the interview that were noted during the interview itself and while transcribing the audio tape and analysing the transcript. Reflexive notes also included the researcher's subjective responses to the setting and the relationship with the interviewees.
PUBLISHING

How do I report my qualitative study?
The process of writing up your qualitative study reflects the iterative process of performing qualitative research. As you start your study, you make choices about the design, and as your study proceeds, you develop your design further. The same applies to writing your manuscript. First, you decide its structure, and during the process of writing, you adapt certain aspects. Moreover, while writing you are still analysing and fine-tuning your findings. The usual structure of articles is a structured abstract with subheadings, followed by the main text, structured in sections labelled Introduction-Methods-Results-Discussion. You might apply this structure loosely, for example renaming Results as Findings, but sometimes your specific study design requires a different structure. For example, an ethnographic study might use a narrative abstract and then start by describing a specific case, or combine the Findings and Discussion sections. A qualitative article is usually much longer (5000–7000 words) than quantitative articles, which often present their results in tables.
You might present quantified characteristics of your participants in tables or running text, and you are likely to use boxes to present your interview guide or questioning route, or an overview of the main findings in categories, subcategories and themes. Most of your article is running text, providing a balanced presentation. You provide a thick description of the participants and the context, transparently describe and reflect on your methods, and do justice to the richness of your qualitative findings in reporting, interpreting and discussing them. Thus, the Methods and Findings sections will be much longer than in a quantitative paper.

The difference between reporting quantitative and qualitative research becomes most visible in the Results section. Quantitative articles have a strict division between the Results section, which presents the evidence, and the Discussion section. In contrast, the Findings section in qualitative papers consists mostly of synthesis and interpretation, often with links to empirical data. Quantitative and qualitative researchers alike, however, need to be concise in presenting the main findings to answer the research question, and avoid distractions. Therefore, you need to make choices to provide a comprehensive and balanced representation of your findings. Your main findings may consist, for example, of interpretations, relationships and themes, and your Findings section might include the development of a theory or model, or integration with earlier research or theory. You present evidence to substantiate your analytic findings. You use quotes or citations in the text, or field notes, text excerpts or photographs in boxes to illustrate and visualize the variety and richness of the findings.

Before you start preparing your article, it is wise to examine first the journal of your choice. You need to check its guidelines for authors and recommended sources for reference style, ethics, etc., as well as recently accepted qualitative manuscripts. More and more journals also refer to quality criteria lists for reporting qualitative research, and ask you to upload the checklist with your submission. Two of these checklists are available at http://www.equator-network.org/reporting-guidelines.
How do I select a potential journal for publishing my research?
Selecting a potential journal for publishing qualitative articles is not much different from the procedure used for quantitative articles. First, you consider your potential public and the healthcare settings, health problems, field, or research methodology you are focusing on. Next, you look for journals in the Journal Citation Index of Web of Science, consult other researchers and study the potential journals' aims, scopes, and author guidelines. This also enables you to find out how open these journals are to publishing qualitative research and accepting articles with different designs, structures and lengths. If you are unsure whether the journal of your choice would accept qualitative research, you might contact the Editor-in-Chief. Lastly, you might look in your top three journals for qualitative articles, and try to decide how your manuscript would fit in. The author guidelines and examples of manuscripts will support you during your writing, and your top three offer alternatives in case you need to turn to another journal.
What are the journal editors' considerations in accepting a qualitative manuscript?
Your article should effectively present high-quality research and should adhere to the journal's guidelines. Editors essentially use the same criteria for qualitative articles as for quantitative articles: is it new, is it true, is it relevant? However, editors may use, implicitly or explicitly, the level-of-evidence pyramid, with qualitative research positioned in the lower ranks. Moreover, many medical journal editors will be more familiar with quantitative designs than with qualitative work. Therefore, you need to put some extra effort into your cover letter to the editor, to enhance their confidence in the newness, trueness and relevance, and the quality of your work. It is of the utmost importance that you explain in your cover letter why your study required a qualitative design, and this will probably take more words than usual. If you need to deviate from the usual structure, you have to explain why. To enhance confidence in the quality of your work, you should explain how you applied quality criteria or refer to the checklist you used (Boxes 2 and 3). You might even attach the checklist as additional information to the manuscript. You might also request that the Editor-in-Chief invite at least one reviewer who is familiar with qualitative research.

Box 3. Quality criteria checklists for reporting qualitative research. Based on O'Brien et al. [9]; Tong et al. [10].

Standards for reporting qualitative research (SRQR): covers all aspects of qualitative studies; 21 items for: title, abstract, introduction, methods, results/findings, discussion, conflicts of interest, and funding.

Consolidated criteria for reporting qualitative research (COREQ): covers qualitative studies focusing on in-depth interviews and focus groups; 32 items for: research team and reflexivity, study design, data analysis, and reporting.
ACKNOWLEDGEMENTS The authors wish to thank the following junior researchers who have been participating for the last few years in the so-called ‘Think tank on qualitative research’ project, a collaborative project between Zuyd University of Applied Sciences and Maastricht University, for their pertinent questions: Erica Baarends, Jerome van Dongen, Jolanda Friesen-Storms, Steffy Lenzen, Ankie Hoefnagels, Barbara Piskur, Claudia van Putten-Gamel, Wilma Savelberg, Steffy Stans, and Anita Stevens. The authors are grateful to Isabel van Helmond, Joyce Molenaar and Darcy Ummels for proofreading our manuscripts and providing valuable feedback from the ‘novice perspective’.
REFERENCES
1. Moser A, Korstjens I. Series: Practical guidance to qualitative research. Part 1: Introduction. Eur J Gen Pract. 2017;23:271–273.
2. Korstjens I, Moser A. Series: Practical guidance to qualitative research. Part 2: Context, research questions and designs. Eur J Gen Pract. 2017;23:274–279.
3. Moser A, Korstjens I. Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. Eur J Gen Pract. 2018;24. DOI: 10.1080/13814788.2017.1375091
4. Lincoln YS, Guba EG. Naturalistic inquiry. California: Sage Publications; 1985.
5. Tracy SJ. Qualitative quality: eight 'big-tent' criteria for excellent qualitative research. Qual Inq. 2010;16:837–851.
6. Moser A, van der Bruggen H, Widdershoven G, et al. Self-management of type 2 diabetes mellitus: a qualitative investigation from the perspective of participants in a nurse-led, shared-care programme in the Netherlands. BMC Public Health. 2008;8:91.
7. Sim J, Sharp K. A critical appraisal of the role of triangulation in nursing research. Int J Nurs Stud. 1998;35:23–31.
8. Mauthner NS, Doucet A. Reflexive accounts and accounts of reflexivity in qualitative data analysis. Sociology. 2003;37:413–431.
9. O'Brien BC, Harris IB, Beckman TJ, et al. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89:1245–1251.
10. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–357.
Chapter 5

PARTICIPANT OBSERVATION AS A DATA COLLECTION METHOD
Barbara B. Kawulich
University of West Georgia, Educational Leadership and Professional Studies Department, 1601 Maple Street, Room 153, Education Annex, Carrollton, GA 30118, USA
ABSTRACT
Observation, particularly participant observation, has been used in a variety of disciplines as a tool for collecting data about people, processes, and cultures in qualitative research. This paper provides a look at various definitions of participant observation, the history of its use, the purposes for which it is used, the stances of the observer, and when, what, and how to observe. Information on keeping field notes and writing them up is also discussed, along with some exercises for teaching observation techniques to researchers-in-training.

Citation: (APA): Kawulich, B. B. (2005, May). Participant observation as a data collection method. In Forum qualitative sozialforschung/forum: Qualitative social research, Vol. 6, No. 2. (28 pages)

Copyright: © 2005 Barbara B. Kawulich. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Keywords: participant observation, qualitative research methods, field notes
INTRODUCTION
Participant observation, for many years, has been a hallmark of both anthropological and sociological studies. In recent years, the field of education has seen an increase in the number of qualitative studies that include participant observation as a way to collect information. Qualitative methods of data collection, such as interviewing, observation, and document analysis, have been included under the umbrella term of "ethnographic methods" in recent years. The purpose of this paper is to discuss observation, particularly participant observation, as a tool for collecting data in qualitative research studies. Aspects of observation discussed herein include various definitions of participant observation, some history of its use, the purposes for which such observation is used, the stances or roles of the observer, and additional information about when, what, and how to observe. Further information is provided to address keeping field notes and their use in writing up the final story. [1]
DEFINITIONS
MARSHALL and ROSSMAN (1989) define observation as "the systematic description of events, behaviors, and artifacts in the social setting chosen for study" (p.79). Observations enable the researcher to describe existing situations using the five senses, providing a "written photograph" of the situation under study (ERLANDSON, HARRIS, SKIPPER, & ALLEN, 1993). DeMUNCK and SOBO (1998) describe participant observation as the primary method used by anthropologists doing fieldwork. Fieldwork involves "active looking, improving memory, informal interviewing, writing detailed field notes, and perhaps most importantly, patience" (DeWALT & DeWALT, 2002, p.vii). Participant observation is the process enabling researchers to learn about the activities of the people under study in the natural setting through observing and participating in those activities. It provides the context for development of sampling guidelines and interview guides (DeWALT & DeWALT, 2002). SCHENSUL, SCHENSUL, and LeCOMPTE (1999) define participant observation as "the process of learning through exposure to or involvement in the day-to-day or routine activities of participants in the researcher setting" (p.91). [2]
BERNARD (1994) adds to this understanding, indicating that participant observation requires a certain amount of deception and impression management. Most anthropologists, he notes, need to maintain a sense of objectivity through distance. He defines participant observation as the process of establishing rapport within a community and learning to act in such a way as to blend into the community so that its members will act naturally, then removing oneself from the setting or community to immerse oneself in the data to understand what is going on and be able to write about it. He includes more than just observation in the process of being a participant observer; he includes observation, natural conversations, interviews of various sorts, checklists, questionnaires, and unobtrusive methods. Participant observation is characterized by such actions as having an open, nonjudgmental attitude, being interested in learning more about others, being aware of the propensity for feeling culture shock and for making mistakes, the majority of which can be overcome, being a careful observer and a good listener, and being open to the unexpected in what is learned (DeWALT & DeWALT, 1998). [3] FINE (2003) uses the term “peopled ethnography” to describe text that provides an understanding of the setting and that describes theoretical implications through the use of vignettes, based on field notes from observations, interviews, and products of the group members. He suggests that ethnography is most effective when one observes the group being studied in settings that enable him/her to “explore the organized routines of behavior” (p.41). FINE, in part, defines “peopled ethnography” as being based on extensive observation in the field, a labor-intensive activity that sometimes lasts for years. In this description of the observation process, one is expected to become a part of the group being studied to the extent that the members themselves include the observer in the activity and turn to the observer for information about how the group is operating. He also indicates that it is at this point, when members begin to ask the observer questions about the group and when they begin to include the observer in the “gossip,” that it is time to leave the field. This process he describes of becoming a part of the community, while observing their behaviors and activities, is called participant observation. [4]
THE HISTORY OF PARTICIPANT OBSERVATION AS A METHOD
Participant observation is considered a staple in anthropological studies, especially in ethnographic studies, and has been used as a data collection
method for over a century. As DeWALT and DeWALT (2002) relate it, one of the first instances of its use involved the work of Frank Hamilton CUSHING, who spent four and a half years as a participant observer with the Zuni Pueblo people around 1879 in a study for the Smithsonian Institution’s Bureau of Ethnology. During this time, CUSHING learned the language, participated in the customs, was adopted by a pueblo, and was initiated into the priesthood. Because he did not publish extensively about this culture, he was criticized as having gone native, meaning that he had lost his objectivity and, therefore, his ability to write analytically about the culture. My own experience conducting research in indigenous communities, which began about ten years ago with my own ethnographic doctoral dissertation on Muscogee (Creek) women’s perceptions of work (KAWULICH, 1998) and has continued in the years since (i.e., KAWULICH, 2004), leads me to believe that, while this may have been the case, it is also possible that he held the Zuni people in such high esteem that he felt it impolitic or irreverent to do so. In my own research, I have been hesitant to write about religious ceremonies or other aspects of indigenous culture that I have observed, for example, for fear of relating information that my participants or other community members might feel should not be shared. When I first began conducting my ethnographic study of the Muscogee culture, I was made aware of several incidents in which researchers were perceived to have taken information they had obtained through interviews or observations and had published their findings without permission of the Creek people or done so without giving proper credit to the participants who had shared their lives with the researchers. [5] A short time later, in 1888, Beatrice Potter WEBB studied poor neighborhoods during the day and returned to her privileged lifestyle at night. She took a job as a rent collector to interact with the people in buildings and offices and took a job as a seamstress in a sweatshop to better understand their lives. Then, in the early 1920s, MALINOWSKI studied and wrote about his participation and observation of the Trobriands, a study BERNARD (1998) calls one of the most cited early discussions of anthropological data collection methods. Around the same time, Margaret MEAD studied the lives of adolescent Samoan girls. MEAD’s approach to data collection differed from that of her mentor, anthropologist Frank BOAS, who emphasized the use of historical texts and materials to document disappearing native cultures. Instead, MEAD participated in the living culture to record their cultural activities, focusing on specific activities, rather than participating in the activities of the culture overall as did MALINOWSKI.
By 1874, the Royal Anthropological Institute of Great Britain had published a manual of methods called Notes and Queries on Anthropology, which was subsequently revised several times until 1971 (BERNARD, 1998). [6] STOCKING (1983, as cited in DeWALT & DeWALT, 2002) divided participant observation as an ethnographic method of data collection into three phases: participation, observation, and interrogation, pointing out that MALINOWSKI and MEAD both emphasized the use of observation and interrogation, but not participation. He suggests that both MEAD and MALINOWSKI held positions of power within the culture that enabled them to collect data from a position of privilege. While ethnographers traditionally tried to understand others by observing them and writing detailed accounts of others’ lives from an outsider viewpoint, more recently, sociologists have taken a more insider viewpoint by studying groups in their own cultures. These sociological studies have brought into question the stance or positioning of the observer and generated more creative approaches to lending voice to others in the presentation of the findings of their studies (GAITAN, 2000). By the 1940s, participant observation was widely used by both anthropologists and sociologists. The previously noted studies were some of the first to use the process of participant observation to obtain data for understanding various cultures and, as such, are considered to be required reading in anthropology classes. [7]
WHY USE OBSERVATION TO COLLECT DATA?
Observation methods are useful to researchers in a variety of ways. They provide researchers with ways to check for nonverbal expression of feelings, determine who interacts with whom, grasp how participants communicate with each other, and check for how much time is spent on various activities (SCHMUCK, 1997). Participant observation allows researchers to check definitions of terms that participants use in interviews, observe events that informants may be unable or unwilling to share when doing so would be impolitic, impolite, or insensitive, and observe situations informants have described in interviews, thereby making them aware of distortions or inaccuracies in description provided by those informants (MARSHALL & ROSSMAN, 1995). [8]

DeWALT and DeWALT (2002) believe that "the goal for design of research using participant observation as a method is to develop a holistic understanding of the phenomena under study that is as objective and accurate as possible given the limitations of the method" (p.92). They suggest that
participant observation be used as a way to increase the validity1) of the study, as observations may help the researcher have a better understanding of the context and phenomenon under study. Validity is stronger with the use of additional strategies used with observation, such as interviewing, document analysis, or surveys, questionnaires, or other more quantitative methods. Participant observation can be used to help answer descriptive research questions, to build theory, or to generate or test hypotheses (DeWALT & DeWALT, 2002). [9]

When designing a research study and determining whether to use observation as a data collection method, one must consider the types of questions guiding the study, the site under study, what opportunities are available at the site for observation, the representativeness of the participants of the population at that site, and the strategies to be used to record and analyze the data (DeWALT & DeWALT, 2002). [10]

Participant observation is a beginning step in ethnographic studies. SCHENSUL, SCHENSUL, and LeCOMPTE (1999) list the following reasons for using participant observation in research:
• to identify and guide relationships with informants;
• to help the researcher get the feel for how things are organized and prioritized, how people interrelate, and what are the cultural parameters;
• to show the researcher what the cultural members deem to be important in manners, leadership, politics, social interaction, and taboos;
• to help the researcher become known to the cultural members, thereby easing facilitation of the research process; and
• to provide the researcher with a source of questions to be addressed with participants (p.91). [11]

BERNARD (1994) lists five reasons for including participant observation in cultural studies, all of which increase the study's validity:
• It makes it possible to collect different types of data. Being on site over a period of time familiarizes the researcher to the community, thereby facilitating involvement in sensitive activities to which he/she generally would not be invited.
• It reduces the incidence of "reactivity" or people acting in a certain way when they are aware of being observed.
• It helps the researcher to develop questions that make sense in the native language or are culturally relevant.
• It gives the researcher a better understanding of what is happening in the culture and lends credence to one's interpretations of the observation. Participant observation also enables the researcher to collect both quantitative and qualitative data through surveys and interviews.
• It is sometimes the only way to collect the right data for one's study (pp.142-3). [12]
ADVANTAGES AND DISADVANTAGES OF USING PARTICIPANT OBSERVATION
DeMUNCK and SOBO (1998) provide several advantages of using participant observation over other methods of data collection. These include that it affords access to the "backstage culture" (p.43); it allows for richly detailed description, which they interpret to mean that one's goal of describing "behaviors, intentions, situations, and events as understood by one's informants" is highlighted (p.43); and it provides opportunities for viewing or participating in unscheduled events. DeWALT and DeWALT (2002) add that it improves the quality of data collection and interpretation and facilitates the development of new research questions or hypotheses (p.8). [13]

DeMUNCK and SOBO also share several disadvantages of using participation as a method, including that sometimes the researcher may not be interested in what happens out of the public eye and that one must rely on the use of key informants. The MEAD-FREEMAN2) controversy illustrates how different researchers gain different understanding of what they observe, based on the key informant(s) used in the study. Problems related to representation of events and the subsequent interpretations may occur when researchers select key informants who are similar to them or when the informants are community leaders or marginal participants (DeMUNCK & SOBO, 1998). To alleviate this potential bias problem, BERNARD (1994) suggests pretesting informants or selecting participants who are culturally competent in the topic being studied. [14]

JOHNSON and SACKETT (1998) discuss participant observation as a source of erroneous description in behavioral research. They note that the information collected by anthropologists is not representative of the culture,
as much of the data collected by these researchers is observed based on the researcher’s individual interest in a setting or behavior, rather than being representative of what actually happens in a culture. For example, they report that more data has been collected about political/religious activities than about eating/sleeping activities, because the political/religious activities are more interesting to researchers than eating/sleeping activities; yet, the amount of time the cultural members spent on political/religious activities was less than 3%, while the amount of time they spent eating/sleeping was greater than 60%. Such actions skew the description of cultural activities. To alleviate this problem, they advocate the use of systematic observation procedures to incorporate rigorous techniques for sampling and recording behavior that keep researchers from neglecting certain aspects of culture. Their definition of structured observation directs who is observed, when and where they are observed, what is observed, and how the observations are recorded, providing a more quantitative observation than participant observation. [15]
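As a purely illustrative aside (not part of the original paper), the sketch below shows how time-sampled records from a structured observation protocol of the kind JOHNSON and SACKETT advocate might be tallied to estimate the share of observations falling in each activity category. The records, person labels and category names are invented for illustration.

```python
# Hypothetical tally of time-sampled structured observation records:
# at fixed intervals the observer notes who was observed and what activity
# was occurring; afterwards the share of observations per activity is computed.
from collections import Counter

observations = [  # (time of day, person observed, activity recorded) - invented
    ("08:00", "A", "eating/sleeping"),
    ("08:30", "B", "eating/sleeping"),
    ("09:00", "A", "subsistence work"),
    ("09:30", "C", "political/religious activity"),
    ("10:00", "B", "eating/sleeping"),
    ("10:30", "C", "subsistence work"),
]

counts = Counter(activity for _, _, activity in observations)
total = sum(counts.values())
for activity, n in counts.most_common():
    print(f"{activity}: {n}/{total} sampled observations ({n / total:.0%})")
```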
Limitations of Observation
Several researchers have noted the limitations involved with using observations as a tool for data collection. For example, DeWALT and DeWALT (2002) note that male and female researchers have access to different information, as they have access to different people, settings, and bodies of knowledge. Participant observation is conducted by a biased human who serves as the instrument for data collection; the researcher must understand how his/her gender, sexuality, ethnicity, class, and theoretical approach may affect observation, analysis, and interpretation. [16]

SCHENSUL, SCHENSUL, and LeCOMPTE (1999) refer to participation as meaning almost total immersion in an unfamiliar culture to study others' lives through the researcher's participation as a full-time resident or member, though they point out that most observers are not full participants in community life. There are a number of things that affect whether the researcher is accepted in the community, including one's appearance, ethnicity, age, gender, and class, for example. Another factor they mention that may inhibit one's acceptance relates to what they call the structural characteristics, that is, those mores that exist in the community regarding interaction and behavior (p.93). Some of the reasons they mention for a researcher's not being included in activities include a lack of trust, the community's discomfort with having an outsider there, potential danger to either the community or the researcher, and the community's lack of funds to
further support the researcher in the research. Some of the ways the researcher might be excluded include the community members' use of a language that is unfamiliar to the researcher, their changing from one language to another that is not understood by the researcher, their changing the subject when the researcher arrives, their refusal to answer certain questions, their moving away from the researcher to talk out of earshot, or their failure to invite the researcher to social events. [17]

SCHENSUL, SCHENSUL, and LeCOMPTE further point out that all researchers should expect to experience a feeling of having been excluded at some point in the research process, particularly in the beginning. The important thing, they note, is for the researcher to recognize what that exclusion means to the research process and that, after the researcher has been in the community for a while, the community is likely to have accepted the researcher to some degree. [18]

Another limitation involved in conducting observations is noted by DeWALT, DeWALT, and WAYLAND (1998). The researcher must determine to what extent he/she will participate in the lives of the participants and whether to intervene in a situation. Another potential limitation they mention is that of researcher bias. They note that, unless ethnographers use other methods than just participant observation, there is likelihood that they will fail to report the negative aspects of the cultural members. They encourage the novice researcher to practice reflexivity at the beginning of one's research to help him/her understand the biases he/she has that may interfere with correct interpretation of what is observed. Researcher bias is one of the aspects of qualitative research that has led to the view that qualitative research is subjective, rather than objective. According to RATNER (2002), some qualitative researchers believe that one cannot be both objective and subjective, while others believe that the two can coexist, that one's subjectivity can facilitate understanding the world of others. He notes that, when one reflects on one's biases, he/she can then recognize those biases that may distort understanding and replace them with those that help him/her to be more objective. In this way, he suggests, the researcher is being respectful of the participants by using a variety of methods to ensure that what he/she thinks is being said, in fact, matches the understanding of the participant. BREUER and ROTH (2003) use a variety of methods for knowledge production, including, for example, positioning or various points of view, different frames of reference, such as special or temporal relativity, perceptual schemata based on experience, and interaction with the social context, understanding that any interaction changes the observed object.
Using different approaches to data collection and observation, in particular, leads to richer understanding of the social context and the participants therein. [19] SCHENSUL, SCHENSUL, and LeCOMPTE (1999) also suggest that observation is filtered through one’s interpretive frames and that “the most accurate observations are shaped by formative theoretical frameworks and scrupulous attention to detail” (p.95). The quality of the participant observation depends upon the skill of the researcher to observe, document, and interpret what has been observed. It is important in the early stages of the research process for the researcher to make accurate observation field notes without imposing preconceived categories from the researcher’s theoretical perspective, but allow them to emerge from the community under study (see Section 10). [20]
THE STANCES OF THE OBSERVER
The degree to which the researcher involves himself/herself in participation in the culture under study makes a difference in the quality and amount of data he/she will be able to collect. GOLD (1958) has provided a description of observer stances that extend Buford JUNKER's explanation of four theoretical stances for researchers conducting field observations. GOLD relates the four observation stances as follows:
• At one extreme is the complete participant, who is a member of the group being studied and who conceals his/her researcher role from the group to avoid disrupting normal activity. The disadvantages of this stance are that the researcher may lack objectivity, the group members may feel distrustful of the researcher when the research role is revealed, and the ethics of the situation are questionable, since the group members are being deceived.
• In the participant as observer stance, the researcher is a member of the group being studied, and the group is aware of the research activity. In this stance, the researcher is a participant in the group who is observing others and who is interested more in observing than in participating, as his/her participation is a given, since he/she is a member of the group. This role also has disadvantages, in that there is a trade off between the depth of the data revealed to the researcher and the level of confidentiality provided to the group for the information they provide.
• The observer as participant stance enables the researcher to participate in the group activities as desired, yet the main role of the researcher in this stance is to collect data, and the group being studied is aware of the researcher's observation activities. In this stance, the researcher is an observer who is not a member of the group and who is interested in participating as a means for conducting better observation and, hence, generating more complete understanding of the group's activities. MERRIAM (1998) points out that, while the researcher may have access to many different people in this situation from whom he/she may obtain information, the group members control the level of information given. As ADLER and ADLER (1994, p.380) note, this "peripheral membership role" enables the researcher to "observe and interact closely enough with members to establish an insider's identity without participating in those activities constituting the core of group membership."
• The opposite extreme stance from the complete participant is the complete observer, in which the researcher is completely hidden from view while observing or when the researcher is in plain sight in a public setting, yet the public being studied is unaware of being observed. In either case, the observation in this stance is unobtrusive and unknown to participants. [21]

Of these four stances, the role providing the most ethical approach to observation is that of the observer as participant, as the researcher's observation activities are known to the group being studied, yet the emphasis for the researcher is on collecting data, rather than participating in the activity being observed. [22]
MERRIAM (1998) calls the stance of participant observer a “schizophrenic activity” (p.103), because the researcher participates in the setting under study, but not to the extent that he/she becomes too absorbed to observe and analyze what is happening. The question frequently is asked, should the researcher be concerned about his/her role of participant observer affecting the situation. MERRIAM (1998) suggests that the question is not whether the process of observing affects the situation or the participants, but how the researcher accounts for those effects in explaining the data. Participant observation is more difficult than simply observing without participation in the activity of the setting, since it usually requires that the field notes be jotted down at a later time, after the activity has concluded.
Yet there are situations in which participation is required for understanding. Simply observing without participating in the action may not lend itself to one’s complete understanding of the activity. [23] DeWALT and DeWALT provide an alternative view of the roles the participant observer may take, by comparing the various stances of observation through membership roles described by both SPRADLEY (1980, pp.58-62) and ADLER and ADLER (1987). SPRADLEY describes the various roles that observers may take, ranging in degree of participation from non-participation (activities are observed from outside the research setting) to passive participation (activities are observed in the setting but without participation in activities) to moderate participation (activities are observed in the setting with almost complete participation in activities) to complete participation (activities are observed in the setting with complete participation in the culture). ADLER and ADLER similarly describe the range of membership roles to include peripheral membership, active membership, and full membership. Those serving in a peripheral membership role observe in the setting but do not participate in activities, while active membership roles denote the researcher’s participation in certain or all activities, and full membership is reflected by fully participating in the culture. The degree to which the researcher may participate may be determined by the researcher or by the community (DeWALT & DeWALT, 2002). [24] Other factors that may affect the degree to which one may participate in the culture include the researcher’s age, gender, class, and ethnicity. One also must consider the limitations of participating in activities that are dangerous or illegal. “The key point is that researchers should be aware of the compromises in access, objectivity, and community expectation that are being made at any particular place along the continuum. Further, in the writing of ethnography, the particular place of the researcher on this continuum should be made clear” (DeWALT & DeWALT, 2002 p.23). [25]
HOW DOES ONE KNOW WHAT TO OBSERVE?
MERRIAM (1998) suggests that the most important factor in determining what a researcher should observe is the researcher's purpose for conducting the study in the first place. "Where to begin looking depends on the research question, but where to focus or stop action cannot be determined ahead of time" (MERRIAM, 1998, p.97). [26]
To help the researcher know what to observe, DeWALT and DeWALT (2002) suggest that he/she study what is happening and why; sort out the regular from the irregular activities; look for variation to view the event in its entirety from a variety of viewpoints; look for the negative cases or exceptions; and, when behaviors exemplify the theoretical purposes for the observation, seek similar opportunities for observation and plan systematic observations of those events/behaviors. Over time, such events may change, with the season, for example, so persistent observation of activities or events that one has already observed may be necessary. [27] WOLCOTT (2001) suggests that fieldworkers ask themselves if they are making good use of the opportunity to learn what it is they want to know. He further advises that fieldworkers ask themselves if what they want to learn makes the best use of the opportunity presented. [28]
HOW DOES ONE CONDUCT AN OBSERVATION?
WHYTE (1979) notes that, while there is no one way that is best for conducting research using participant observation, the most effective work is done by researchers who view informants as collaborators; to do otherwise, he adds, is a waste of human resources. His emphasis is on the relationship between the researcher and informants as collaborative researchers who, through building solid relationships, improve the research process and improve the skills of the researcher to conduct research. [29] Conducting observations involves a variety of activities and considerations for the researcher, which include ethics, establishing rapport, selecting key informants, the processes for conducting observations, deciding what and when to observe, keeping field notes, and writing up one’s findings. In this section, these aspects of the research activities are discussed in more detail. [30]
Ethics
A primary consideration in any research study is to conduct the research in an ethical manner, letting the community know that one’s purpose for observing is to document their activities. While there may be instances where covert observation methods might be appropriate, these situations are few and are suspect. DeWALT, DeWALT, and WAYLAND (1998) advise the researcher to take some of the field notes publicly to reinforce that what the researcher is doing is collecting data for research purposes. When the researcher meets community members for the first time, he/she should
be sure to inform them of the purpose for being there, sharing sufficient information with them about the research topic that their questions about the research and the researcher’s presence there are put to rest. This means that one is constantly introducing oneself as a researcher. [31] Another ethical responsibility is to preserve the anonymity of the participants in the final write-up and in field notes to prevent their identification, should the field notes be subpoenaed for inspection. Individual identities must be described in ways that community members will not be able to identify the participants. Several years ago, when I submitted an article for publication, one of the reviewers provided feedback that it would be helpful to the reader if I described the participants as, for example, “a 35 year old divorced mother of three, who worked at Wal-Mart.” This level of detail was not a feasible option for me in providing a description of individual participants, as it would have been easy for the local community members to identify these participants from such specific detail; this was a small community where everyone knew everyone else, and they would have known who the woman was. Instead, I only provided broad descriptions that lacked specific details, such as “a woman in her thirties who worked in the retail industry.” [32] DeWALT, DeWALT, and WAYLAND also point out that there is an ethical concern regarding the relationships established by the researcher when conducting participant observation; the researcher needs to develop close relationships, yet those relationships are difficult to maintain, when the researcher returns to his/her home at a distant location. It is typical for researchers who spend an extended period of time in a community to establish friendships or other relationships, some of which may extend over a lifetime; others are transient and extend only for the duration of the research study. Particularly when conducting cross-cultural research, it is necessary to have an understanding of cultural norms that exist. As MARSHALL and BATTEN (2004) note, one must address issues, such as potential exploitation and inaccuracy of findings, or other actions which may cause damage to the community. They suggest that the researcher take a participatory approach to research by including community members in the research process, beginning with obtaining culturally appropriate permission to conduct research and ensuring that the research addresses issues of importance to the community. They further suggest that the research findings be shared with the community to ensure accuracy of findings. In my own ongoing research projects with the Muscogee (Creek) people, I have maintained relationships with many of the people, including tribal leaders, tribal administrators, and
council members, and have shared the findings with selected tribal members to check my findings. Further, I have given them copies of my work for their library. I, too, have found that, by taking a participatory approach to my research with them, I have been asked to participate in studies that they wish to have conducted. [33]
Gaining Entry and Establishing Rapport
Regarding entering the field, there are several activities that must be addressed. These include choosing a site, gaining permission, selecting key informants, and familiarizing oneself with the setting or culture (BERNARD, 1994). In this process, one must choose a site that will facilitate easy access to the data. The objective is to collect data that will help answer the research questions. [34] To assist in gaining permission from the community to conduct the study, the researcher may bring letters of introduction or other information that will ease entry, such as information about one’s affiliation, funding sources, and planned length of time in the field. One may need to meet with the community leaders. For example, when one wishes to conduct research in a school, permission must be granted by the school principal and, possibly, by the district school superintendent. For research conducted in indigenous communities, it may be necessary to gain permission from the tribal leader or council. [35] One should use personal contacts to ease entry; these would include key informants who serve as gatekeepers, but BERNARD cautions against choosing a gatekeeper who represents one side of warring factions, as the researcher may be seen as affiliated with that faction. He also cautions that, when using highly placed individuals as gatekeepers, the researcher may be expected to serve as a spy. AGAR (1980) suggests that the researcher be wary of accepting the first people he/she encounters in the research setting as key informants, as they may be “deviants” or “professional stranger handlers.” The former may be people who live on the fringe of the culture, and association with them may provide the researcher with erroneous views of the culture or may alienate the researcher from others who might better inform the study. The “professional stranger handlers” are those people who take upon themselves the job of finding out what it is the researcher is after and how it may affect the members of the culture. AGAR suggests finding a key informant to sponsor the researcher to facilitate his/her meeting those people who can provide the needed information. These key informants must
be people who are respected by other cultural members and who are viewed to be neutral, to enable the researcher to meet informants in all of the various factions found in the culture. [36] The researcher also should become familiar with the setting and social organization of the culture. This may involve mapping out the setting or developing social networks to help the researcher understand the situation. These activities also are useful for enabling the researcher to know what to observe and from whom to gather information. [37] “Hanging out” is the process through which the researcher gains trust and establishes rapport with participants (BERNARD, 1994). DeMUNCK and SOBO (1998) state that, “only through hanging out do a majority of villagers get an opportunity to watch, meet, and get to know you outside your ‘professional’ role” (p.41). This process of hanging out involves meeting and conversing with people to develop relationships over an extended period of time. There are three stages to the hanging out process, moving from a position of formal, ignorant intruder to welcome, knowledgeable intimate (DeMUNCK & SOBO). The first stage is the stage at which the researcher is a stranger who is learning the social rules and language, making herself/ himself known to the community, so they will begin to teach her/him how to behave appropriately in that culture. In the second stage, one begins to merge with the crowd and stand out less as an intruder, what DeMUNCK and SOBO call the “acquaintance” stage. During this stage, the language becomes more familiar to the researcher, but he/she still may not be fluent in its use. The third stage they mention is called the “intimate” stage, during which the researcher has established relationships with cultural participants to the extent that he/she no longer has to think about what he/she says, but is as comfortable with the interaction as the participants are with her/him being there. There is more to participant observation than just hanging out. It sometimes involves the researcher’s working with and participating in everyday activities beside participants in their daily lives. It also involves taking field notes of observations and interpretations. Included in this fieldwork is persistent observation and intermittent questioning to gain clarification of meaning of activities. [38] Rapport is built over time; it involves establishing a trusting relationship with the community, so that the cultural members feel secure in sharing sensitive information with the researcher to the extent that they feel assured that the information gathered and reported will be presented accurately and dependably. Rapport-building involves active listening, showing respect and
empathy, being truthful, and showing a commitment to the well-being of the community or individual. Rapport is also related to the issue of reciprocity, the giving back of something in return for their sharing their lives with the researcher. The cultural members are sharing information with the researcher, making him/her welcome in the community, inviting him/her to participate in and report on their activities. The researcher has the responsibility for giving something back, whether it is monetary remuneration, gifts or material goods, physical labor, time, or research results. Confidentiality is also a part of the reciprocal trust established with the community under study. They must be assured that they can share personal information without their identity being exposed to others. [39] BERNARD states that “the most important thing you can do to stop being a freak is to speak the language of the people you’re studying—and speak it well” (1994, p.145). Fluency in the native language helps gain access to sensitive information and increases rapport with participants. Learn about local dialects, he suggests, but refrain from trying to mimic local pronunciations, which may be misinterpreted as ridicule. Learning to speak the language shows that the researcher has a vested interest in the community, that the interest is not transient, and helps the researcher to understand the nuances of conversation, particularly what constitutes humor. [40] As mentioned in the discussion of the limitations of observation, BERNARD suggests that gender affects one’s ability to access certain information and how one views others. What is appropriate action in some cultures is dependent upon one’s gender. Gender can limit what one can ask, what one can observe, and what one can report. For example, several years after completing my doctoral dissertation with Muscogee (Creek) women about their perceptions of work, I returned for additional interviews with the women to gather specific information about more intimate aspects of their lives that had been touched on briefly in our previous conversations, but which were not reported. During these interviews, they shared with me their stories about how they learned about intimacy when they were growing up. Because the conversations dealt with sexual content, which, in their culture, was referred to more delicately as intimacy, I was unable to report my findings, as, to do so, would have been inappropriate. One does not discuss such topics in mixed company, so my writing about this subject might have endangered my reputation in the community or possibly inhibited my continued relationship with community members. I was forced to choose between publishing the findings, which would have benefited my
academic career, and retaining my reputation within the Creek community. I chose to maintain a relationship with the Creek people, so I did not publish any of the findings from that study. I also was told by the funding source that I should not request additional funds for research, if the results would not be publishable. [41]
The Processes of Conducting Observations
Exactly how does one go about conducting observation? WERNER and SCHOEPFLE (1987, as cited in ANGROSINO & dePEREZ, 2000) focus on the process of conducting observations and describe three types of processes:
•	The first is descriptive observation, in which one observes anything and everything, assuming that he/she knows nothing; the disadvantage of this type is that it can lead to the collection of minutiae that may or may not be relevant to the study.
•	The second type, focused observation, emphasizes observation supported by interviews, in which the participants’ insights guide the researcher’s decisions about what to observe.
•	The third type of observation, considered by ANGROSINO and DePEREZ to be the most systematic, is selective observation, in which the researcher focuses on different types of activities to help delineate the differences in those activities (ANGROSINO & dePEREZ, 2000, p.677). [42]
Other researchers have taken a different approach to explaining how to conduct observations. For example, MERRIAM (1988) developed an observation guide in which she compiled various elements to be recorded in field notes. The first of these elements includes the physical environment. This involves observing the surroundings of the setting and providing a written description of the context. Next, she describes the participants in detail. Then she records the activities and interactions that occur in the setting. She also looks at the frequency and duration of those activities/interactions and other subtle factors, such as informal, unplanned activities, symbolic meanings, nonverbal communication, physical clues, and what should happen that has not happened. In her 1998 book, MERRIAM adds such elements as observing the conversation in terms of content, who speaks to whom, who listens, silences, the researcher’s own behavior and how that role affects those one is observing, and what one says or thinks. [43]
To conduct participant observation, one must live in the context to facilitate prolonged engagement; prolonged engagement is one of the
activities listed by LINCOLN and GUBA (1994) to establish trustworthiness. The findings are considered to be more trustworthy, when the researcher can show that he/she spent a considerable amount of time in the setting, as this prolonged interaction with the community enables the researcher to have more opportunities to observe and participate in a variety of activities over time. The reader would not view the findings as credible, if the researcher only spent a week in the culture; however, he/she would be more assured that the findings are accurate, if the researcher lived in the culture for an extended time or visited the culture repeatedly over time. Living in the culture enables one to learn the language and participate in everyday activities. Through these activities, the researcher has access to community members who can explain the meaning that such activities hold for them as individuals and can use conversations to elicit data in lieu of more formal interviews. [44] When I was preparing to conduct my ethnographic study with the Muscogee (Creek) women of Oklahoma, my professor, Valerie FENNELL, told me that I should take the attitude of “treat me like a little child who knows nothing,” so that my informants would teach me what I needed to know about the culture. I found this attitude to be very helpful in establishing rapport, in getting the community members to explain things they thought I should know, and in inviting me to observe activities that they felt were important for my understanding of their culture. DeWALT and DeWALT support the view of the ethnographer as an apprentice, taking the stance of a child in need of teaching about the cultural mores as a means for enculturation. KOTTAK (1994) defines enculturation as “the social process by which culture is learned and transmitted across generations” (p.16). Conducting observations involves such activities as “fitting in, active seeing, short-term memory, informal interviewing, recording detailed field notes, and, perhaps most importantly, patience” (DeWALT & DeWALT, 2002, p.17). DeWALT and DeWALT extend this list of necessary skills, adding MEAD’s suggested activities, which include developing tolerance to poor conditions and unpleasant situations, resisting impulsiveness, particularly interrupting others, and resisting attachment to particular factions or individuals. [45] ANGROSINO and DePEREZ (2000) advocate using a structured observation process to maximize the efficiency of the field experience, minimize researcher bias, and facilitate replication or verification by others, all of which make the findings more objective. This objectivity, they explain, occurs when there is agreement between the researcher and the participants as to what is going on. Sociologists, they note, typically use document
analysis to check their results, while anthropologists tend to verify their findings through participant observation. [46] BERNARD (1994) states that most basic anthropological research is conducted over a period of about a year, but recently there have been participant observations that were conducted in a matter of weeks. In these instances, he notes the use of rapid assessment techniques that include “going in and getting on with the job of collecting data without spending months developing rapport. This means going into a field situation armed with a lot of questions that you want to answer and perhaps a checklist of data that you need to collect” (p.139). [47] In this instance the cultural members are taken into the researcher’s confidence as research partners to enable him/her to get the questions answered. BERNARD notes that those anthropologists who are in the field for extended periods of time are better able to obtain information of a sensitive nature, such as information about witchcraft, sexuality, political feuds, etc. By staying involved with the culture over a period of years, data about social changes that occur over time are more readily perceived and understood. [48] BERNARD and his associates developed an outline of the stages of participant observation fieldwork that includes initial contact; shock; discovering the obvious; the break; focusing; exhaustion, the second break, and frantic activity; and leaving. In ethnographic research, it is common for the researcher to live in the culture under study for extended periods of time and to return home for short breaks, then return to the research setting for more data collection. When the researcher encounters a culture that is different from his/her own and lives in that culture, he/she is constantly bombarded by new stimuli, and culture shock results. Researchers react differently to such shock. Some may sit in their motel room and play cards or read novels to escape. Others may work and rework data endlessly. Sometimes the researcher needs to take a break from the constant observation and note taking to recuperate. When I conducted my dissertation fieldwork, I stayed in a local motel, although I had been invited to stay at the home of some community members. I chose to remain in the motel, because this enabled me to have the down time in the evenings that I needed to write up field notes and code and analyze data. Had I stayed with friends, they may have felt that they had to entertain me, and I would have felt obligated to spend my evenings conversing or participating in whatever activities they had planned, when I needed some time to myself to be alone, think, and “veg” out. [49]
The aspects of conducting observations are discussed above, but these are not the only ways to conduct observations. DeMUNCK and SOBO use freelisting to elicit from cultural members items related to specific categories of information. Through freelisting, they build a dictionary of coded responses to explain various categories. They also suggest the use of pile sorting, which involves the use of cards that participants sort into piles according to similar topics. The process involves making decisions about what topics to include. Such card pile sorting processes are easy to administer and may be meaningful to the participant’s world and frames of reference (DeMUNCK & SOBO, 1998). [50] A different approach to observation, consensus analysis, is a method DeMUNCK and SOBO describe to design sampling frames for ethnographic research, enabling the researcher to establish the viewpoints of the participants from the inside out. This involves aspects of ethnographic fieldwork, such as getting to know participants intimately to understand their way of thinking and experiencing the world. It further involves verifying information gathered to determine if the researcher correctly understood the information collected. The question of whether one has understood correctly lends itself to the internal validity question of whether the researcher has correctly understood the participants. Whether the information can be generalized addresses the external validity in terms of whether the interpretation is transferable from the sample to the population from which it was selected. DeMUNCK and SOBO note that the ethnographer begins with a topic and discusses that topic with various people who know about it. He/She selects a variety of people who know about the topic to include in the sample, remembering that not everyone has the same opinion or experience about the topic. They suggest using a nested sampling frame to determine differences in knowledge about a topic. To help determine the differences, the researcher should ask the participants if they know people who have a different experience or opinion of the topic. Seeking out participants with different points of view enables the researcher to fully flesh out understanding of the topic in that culture. DeMUNCK and SOBO also suggest talking with anyone who is willing to teach you. [51]
TIPS FOR COLLECTING USEFUL OBSERVATION DATA
TAYLOR and BOGDAN (1984) provided several tips for conducting observations after one has gained entry into the setting under study. They suggest that the researcher should:
•	be unobtrusive in dress and actions;
•	become familiar with the setting before beginning to collect data;
•	keep the observations short at first to keep from becoming overwhelmed;
•	be honest, but not too technical or detailed, in explaining to participants what he/she is doing. [52]
MERRIAM (1998) adds that the researcher should:
•	pay attention, shifting from a “wide” to a “narrow” angle perspective, focusing on a single person, activity, interaction, then returning to a view of the overall situation;
•	look for key words in conversations to trigger later recollection of the conversation content;
•	concentrate on the first and last remarks of a conversation, as these are most easily remembered;
•	during breaks in the action, mentally replay remarks and scenes one has observed. [53]
DeWALT and DeWALT (2002) make these suggestions:
•	Actively observe, attending to details one wants to record later.
•	Look at the interactions occurring in the setting, including who talks to whom, whose opinions are respected, how decisions are made. Also observe where participants stand or sit, particularly those with power versus those with less power or men versus women.
•	Counting persons or incidents of observed activity is useful in helping one recollect the situation, especially when viewing complex events or events in which there are many participants.
•	Listen carefully to conversations, trying to remember as many verbatim conversations, nonverbal expressions, and gestures as possible. To assist in seeing events with “new eyes,” turn detailed jottings into extensive field notes, including spatial maps and interaction maps. Look carefully to seek out new insights.
•	Keep a running observation record. [54]
WOLCOTT (2001) adds to the discussion of how to conduct observations. He suggests that, to move around gracefully within the culture, one should:
•	practice reciprocity in whatever terms are appropriate for that culture;
•	be tolerant of ambiguity; this includes being adaptable and flexible;
•	have personal determination and faith in oneself to help alleviate culture shock. [55]
He further shares some tips for doing better participant observation (pp.96-100):
•	When one is not sure what to attend to, he/she should look to see what it is that he/she is attending to and try to determine how and why one’s attention has been drawn as it has. One should take note of what he/she is observing, what is being put into the field notes and in how much detail, and what one is noting about the researcher’s personal experience in conducting the research.
•	The process of note taking is not complete until one has reviewed his/her notes to make sure that he/she is coupling the analysis with observations throughout the process to keep the researcher on track. The researcher should review constantly what he/she is looking for and whether he/she is seeing it or is likely to do so in the circumstances for observation presented. It may be necessary to refocus one’s attention to what is actually going on. This process involves looking for recurring patterns or underlying themes in behavior, action or inaction. He/she should also reflect on what someone from another discipline might find of interest there. He/she should look at her/his participation, what he/she is observing and recording, in terms of the kind of information he/she will need to report rather than what he/she feels he/she should collect.
•	Being attentive for any length of time is difficult to do. One tends to do it off and on. One should be aware that his/her attention to details comes in short bursts that are followed by inattentive rests, and those moments of attention should be capitalized upon.
•	One should reflect on the note taking process and subsequent writing-up practices as a critical part of fieldwork, making it part of the daily routine, keeping the entries up to date. The elaborated note taking also provides a connection between what he/she is experiencing and how he/she is translating that experience into a form that can be communicated to others. He/she should make a habit of including in one’s field notes such specifics as day, date, and time, along with a simple coding system for keeping track of entries, and reflections on and about one’s mood, personal reactions, and random thoughts, as these may help to recapture detail not written down. One should also consider beginning to do some writing as fieldwork proceeds. One should take time frequently to draft expanded pieces written using “thick description,” as described by GEERTZ (1973), so that such details might later be incorporated into the final write up.
•	One should take seriously the challenge of participating and focus, when appropriate, on one’s role as participant over one’s role as observer. Fieldwork involves more than data gathering. It may also involve informal interviews, conversations, or more structured interviews, such as questionnaires or surveys. [56]
BERNARD notes that one must become explicitly aware, being attentive in his/her observations, reporting what is seen, not inferred. It is natural to impose on a situation what is culturally correct, in the absence of real memories, but building memory capacity can be enhanced by practicing reliable observation. If the data one collects is not reliable, the conclusions will not be valid. BERNARD advises that the researcher not talk to anyone after observing, until he/she has written down his/her field notes. He advocates that he/she try to remember things in historical/chronological order and draw a map of the physical space to help him/her remember details. He also suggests that the researcher maintain naiveté, assuming an attitude of learner and being guided by participants’ teaching without being considered stupid, incompetent, or dangerous to their wellbeing. Sometimes, he points out, one’s expertise is what helps to establish rapport. Having good writing skills, that is, writing concisely and compellingly, is also necessary to good participant observation. The researcher must learn to ‘hang out’ to enable him/her to ask questions when appropriate and to ask appropriate questions. Maintaining one’s objectivity means realizing and acknowledging one’s biases, assumptions, prejudices, opinions, and values. [57]
KEEPING AND ANALYZING FIELD NOTES AND WRITING UP THE FINDINGS
KUTSCHE (1998) suggests that, when mapping out a setting, one must first learn to put aside his/her preconceptions. The process of mapping, as he describes it, involves describing the relationship between the sociocultural behavior one observes and the physical environment. The researcher should draw a physical map of the setting, using as much detail as possible.
KUTSCHE suggests that the researcher visit the setting under study at different times of the day to see how it is used differently at different times of the day/night. He/she should describe without judgment and avoid using meaningless adjectives, such as “older” (older than what/whom?) or “pretty” (as compared to what/whom?); use adjectives that help to describe the various aspects of the setting meaningfully (what is it that makes the house inviting?). When one succeeds in avoiding judgment, he/she is practicing cultural relativism. This mapping process uses only one of the five senses—vision. “Human events happen in particular places, weathers, times, and so forth. If you are intrigued, you will be pleased to know that what you are doing is a subdiscipline of anthropology called cultural ecology” (p.16). It involves looking at the interaction of the participants with the environment. STEWARD (1955, as cited in KUTSCHE, 1998), a student of KROEBER (1939, as cited in KUTSCHE, 1998), who wrote about Native American adaptations to North American environments, developed a theory called “multilinear evolution” in which he described how cultural traditions evolve related to specific environments. “Cultural systems are not just rules for behavior, ways of surviving, or straitjackets to constrict free expression ... All cultures, no matter how simple or sophisticated, are also rhythms, music, architecture, the dances of living. ... To look at culture as style is to look at ritual” (p.49). [58]
KUTSCHE refers to ritual as being the symbolic representation of the sentiments in a situation, where the situation involves person, place, time, conception, thing, or occasion. Some of the examples of cultural rituals KUTSCHE presents for analysis include rites of deference or rites of passage. Ritual and habit are different, KUTSCHE explains, in that habits have no symbolic expression or meaning (such as tying one’s shoes in the same way each time). [59]
In mapping out the setting being observed, SCHENSUL, SCHENSUL, and LeCOMPTE (1999) suggest the following be included:
•	a count of attendees, including such demographics as age, gender, and race;
•	a physical map of the setting and description of the physical surroundings;
•	a portrayal of where participants are positioned over time;
•	a description of the activities being observed, detailing activities of interest. [60]
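To illustrate the counting and census-taking elements of the SCHENSUL, SCHENSUL, and LeCOMPTE checklist above, the short Python sketch below shows one hypothetical way of recording repeated head counts by area and broad demographic category; the field names and figures are invented for the example and do not come from the cited authors.

```python
import csv
from collections import defaultdict

# Hypothetical observation tallies: who is present, where, and when.
# Each row is one head count taken during a sweep of the setting.
rows = [
    {"time": "10:00", "area": "kitchen", "category": "adult women", "count": 4},
    {"time": "10:00", "area": "porch",   "category": "adult men",   "count": 3},
    {"time": "11:00", "area": "kitchen", "category": "adult women", "count": 6},
    {"time": "11:00", "area": "yard",    "category": "children",    "count": 9},
]

# Totals by category across the whole observation period.
totals = defaultdict(int)
for row in rows:
    totals[row["category"]] += row["count"]
print(dict(totals))

# Writing the sweeps to a CSV file keeps the counts alongside the field notes.
with open("attendance_sweeps.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["time", "area", "category", "count"])
    writer.writeheader()
    writer.writerows(rows)
```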
They indicate that counting, census taking, and mapping are important ways to help the researcher gain a better understanding of the social setting in the early stages of participation, particularly when the researcher is not fluent in the language and has few key informants in the community. [61] Social differences they mention that are readily observed include differences among individuals, families, or groups by educational level, type of employment, and income. Things to look for include the cultural members’ manner of dress and decorative accoutrements, leisure activities, speech patterns, place of residence and choice of transportation. They also add that one might look for differences in housing structure or payment structure for goods or services. [62] Field notes are the primary way of capturing the data that is collected from participant observations. Notes taken to capture this data include records of what is observed, including informal conversations with participants, records of activities and ceremonies, during which the researcher is unable to question participants about their activities, and journal notes that are kept on a daily basis. DeWALT, DeWALT, and WAYLAND describe field notes as both data and analysis, as the notes provide an accurate description of what is observed and are the product of the observation process. As they note, observations are not data unless they are recorded into field notes. [63] DeMUNCK and SOBO (1998) advocate using two notebooks for keeping field notes, one with questions to be answered, the other with more personal observations that may not fit the topics covered in the first notebook. They do this to alleviate the clutter of extraneous information that can occur when taking field notes. Field notes in the first notebook should include jottings, maps, diagrams, interview notes, and observations. In the second notebook, they suggest keeping memos, casual “mullings, questions, comments, quirky notes, and diary type entries” (p.45). One can find information in the notes easily by indexing and cross-referencing information from both notebooks by noting on index cards such information as “conflicts, gender, jokes, religion, marriage, kinship, men’s activities, women’s activities, and so on” (p.45). They summarize each day’s notes and index them by notebook, page number, and a short identifying description. [64] The feelings, thoughts, and suppositions of the researcher may be noted separately. SCHENSUL, SCHENSUL, and LeCOMPTE (1999) note that good field notes:
•	use exact quotes when possible;
•	use pseudonyms to protect confidentiality;
•	describe activities in the order in which they occur;
•	provide descriptions without inferring meaning;
•	include relevant background information to situate the event;
•	separate one’s own thoughts and assumptions from what one actually observes;
•	record the date, time, place, and name of researcher on each set of notes. [65]
Regarding coding their observation notes, DeMUNCK and SOBO (1998) suggest that coding is used to select and emphasize information that is important enough to record, enabling the researcher to weed out extraneous information and focus his/her observations on the type of information needed for the study. They describe codes as “rules for organizing symbols into larger and more meaningful strings of symbols. It is important, no imperative, to construct a coding system not because the coding system represents the ‘true’ structure of the process you are studying, but because it offers a framework for organizing and thinking about the data” (p.48). [66] KUTSCHE states that, when one is trying to analyze interview information and observation field notes, he/she is trying to develop a model that helps to make sense of what the participants do. One is constructing a model of culture, not telling the truth about the data, as there are numerous truths, particularly when presented from each individual participant’s viewpoint. The researcher should set out an outline of the information he/she has, organize the information according to the outline, then move the points around as the argument of one’s study dictates. He further suggests that he/she organize the collected data into a narrative in which one may tell the story of a day or a week in the lives of informants, as they may have provided information in these terms in response to grand tour questions, that is, questions that encourage participants to elaborate on their description of a cultural scene (SPRADLEY, 1979). Once the data have been organized in this way, there will probably be several sections in the narrative that reflect one’s interpretation of certain themes that make the cultural scene clear to the reader. He further suggests asking participants to help structure the report. In this way, member checks and peer debriefing occur to help ensure the trustworthiness of the data (LINCOLN & GUBA, 1994). [67] When writing up one’s description of a ritual, KUTSCHE advises the researcher to make a short draft of the ritual and then take specific aspects
to focus on and write up in detail with one’s analysis. It is the analysis that differentiates between creative writing and ethnology, he points out. When writing up one’s ethnographic observations, KUTSCHE advises that the researcher follow the lead of SPRADLEY and McCURDY (1972) and find a cultural scene, spend time with the informants, asking questions and clarifying answers, analyze the material, pulling together the themes into a well-organized story. Regarding developing models, he indicates that the aim is to construct a picture of the culture that reflects the data one has collected. He bases his model development on guidelines by Ward H. GOODENOUGH, who advocates that the first level of development includes what happens, followed by a second level of development which includes what the ethnographer has observed, subsequently followed by a third level including what was recorded in the field, and finally followed by a fourth level derived from one’s notes. He adds that GOODENOUGH describes a fifth level, in which ethnological theory is developed from separate models of separate cultures. KUTSCHE defines models as having four properties described by LEVI-STRAUSS (1953, p.525, as cited in KUTSCHE,1998), two of which are pertinent to this discussion: the first property, in which the structure exhibits the characteristics of a system, and the fourth property, in which the model makes clear all observed facts. [68] WOLCOTT indicates that fieldworkers of today should put themselves into their written discussion of the analysis without regaling the reader with self-reports of how well they did their job. This means that there will be a bit of postmodern auto-ethnographic information told in the etic or researcher’s voice (PIKE, 1966), along with the participants’ voices which provide the emic perspective (PIKE, 1966). Autoethnography, in recent years, has become an accepted means for illustrating the knowledge production of researchers from their own perspective, incorporating their own feelings and emotions into the mix, as is illustrated by Carolyn ELLIS (i.e., ELLIS, 2003, and HOLMAN JONES, 2004). [69]
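The note-keeping practices described in this section, dating and timing each entry, coding it topically, keeping the researcher’s own reflections separate from observations, and indexing entries by code, translate readily into a simple digital record. The Python sketch below is a hypothetical illustration only; it is not a reconstruction of any cited author’s system, and all names and entries are invented.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class FieldNote:
    """One field-note entry, kept with its context and topical codes."""
    date: str
    time: str
    place: str
    observation: str              # what was seen or heard
    reflection: str = ""          # researcher's own thoughts, kept separate
    codes: list = field(default_factory=list)

notes = [
    FieldNote("2005-04-05", "14:30", "community center",
              "Elders arrive first and are seated before the meal begins.",
              reflection="Seating order may mark status.",
              codes=["ritual", "deference", "food"]),
    FieldNote("2005-04-06", "09:10", "church grounds",
              "Informal conversation about the upcoming council election.",
              codes=["politics", "gossip"]),
]

# Index from code to the entries tagged with it, mirroring the index-card
# practice of noting topics such as conflicts, gender, jokes, or religion.
index = defaultdict(list)
for note in notes:
    for code in note.codes:
        index[code].append((note.date, note.place))

print(index["ritual"])  # -> [('2005-04-05', 'community center')]
```

Keeping the observation and the reflection in separate fields preserves the distinction, stressed above, between what was actually observed and the researcher’s own interpretation of it.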
TEACHING PARTICIPANT OBSERVATION
Throughout the past eight or so years of teaching qualitative research courses, I have developed a variety of exercises for teaching observation skills, based on techniques I observed from other researchers and teachers of qualitative research or techniques described in others’ syllabi. Over time, I have revised others’ exercises and created my own to address the needs of my students in learning how to conduct qualitative research. Below are several of those
exercises that other professors of qualitative research methods may find useful. [70] Memory Exercise—Students are asked to think of a familiar place, such as a room in their home, and make field notes that include a map of the setting and a physical description of as much as they can remember of what is contained in that setting. They are then asked to compare their recollections with the actual setting to see what they were able to remember and how well they were able to do so. The purpose of this exercise is to help students realize how easy it is to overlook various aspects that they have not consciously tried to remember. In this way, they begin to be attentive to details and begin to practice active observing skills. [71] Sight without sound—In this exercise, students are asked to find a setting in which they are able to see activity but in which they are unable to hear what is being said in the interaction. For a specified length of time (5 to 10 minutes), they are to observe the action/interaction, and record as much information as they can in as much detail as possible. This exercise has also been done by turning off the sound on the television and observing the actions/interactions on a program; students, in this case, are instructed to find a television program with which they are unfamiliar, so they are less apt to impose upon their field notes what they believe they know about familiar characters or programs. This option is less desirable, as students sometimes find it difficult to find a program with which they do not have some familiarity. The purpose of the exercise is to teach the students to begin observing and taking in information using their sight. [72] Instructions for writing up their field notes include having them begin by drawing a map of the setting and providing a description of the participants. By having them record on one side of their paper what information they take in through their senses and on the other side whatever thoughts, feelings, ideas they have about what is happening, they are more likely to begin to see the difference in observed data and their own construction or interpretation of the activity. This exercise also helps them realize the importance of using all of their senses to take in information and the importance of observing both the verbal and the nonverbal behaviors of the situation. Possible settings for observation in this exercise have included sitting inside fastfood restaurants, viewing the playground, observing interactions across parking lots or mall food courts, or viewing interactions at a distance on the subway, for example. [73]
Sound without sight—In this exercise, similar to the above exercise, students are asked to find a setting in which they are able to hear activity/interactions, but in which they are unable to see what is going on. Again, for a specified length of time, they are asked to record as much as they can hear of the interaction, putting their thoughts, feelings, and ideas about what is happening on the right side of the paper, and putting the information they take in with their senses on the left hand side of the paper. Before beginning, they again are asked to describe the setting, but, if possible, they are not to see the participants in the setting under study. In this way, they are better able to note their guesses about the participants’ ages, gender, ethnicity, etc. My students have conducted this exercise in restaurants, listening to conversations of patrons in booths behind them, while sitting on airplanes or other modes of transportation, or by sitting outside classrooms where students were interacting, for example. A variation of this exercise is to have students turn their backs to the television or listen to a radio program with which they are unfamiliar, and have them conduct the exercise in that fashion, without sight to guide their interpretations. [74] In both of these examples, male students are cautioned to stay away from playgrounds or other settings where their actions may be misconstrued. They are further cautioned against sitting in vehicles and observing, as several of my students have been approached by security or police officers who questioned them about their actions. The lesson here is that, while much information can be taken in through hearing conversations, without the body language, meanings can be misconstrued. Further, they usually find it interesting to make guesses about the participants in terms of age, gender, ethnicity, and relationship to other participants in the setting, based on what they heard. [75] In both of these examples, it is especially interesting when one student conducts the sight without sound and another student conducts the sound without sight exercise using the same interaction/setting, as their explanations, when shared in class, sometimes illustrate how easy it is to put one’s own construction on what is actually happening. [76] Photographic Observation—This exercise encourages students to use photographs to help them remember activities, and photographs can serve as illustrations of aspects of activities that are not easily described. Students are asked to take a series of 12 to 36 photographs of an activity, and provide a written description of the activity that tells the story of what is happening in the activity, photo by photo. They are instructed to number the photographs
and take notes as they take pictures to help them keep the photos organized in the right sequence. Several students have indicated that this was a fun exercise in which their children, who were the participants in the activity, were delighted to be involved; they also noted that this provided them with a pictographic recollection of a part of their children’s lives that would be a keepsake. One student recorded her 6 year old daughter’s first formal tea party, for example. [77] Direct Observation—In this instance, students are asked to find a setting they wish to observe in which they will be able to observe without interruption and in which they will not be participating. For some specified length of time (about 15 to 30 minutes), they are asked to record everything they can take in through their senses about that setting and the interactions contained therein for the duration of the time period, again recording on one side of the paper their field notes from observation and on the other side their thoughts, feelings, and ideas about what is happening. Part of the lesson here is that, when researchers are recording aspects of the observation, whether it be the physical characteristics of the setting or interactions between participants, they are unable to both observe and record. This exercise is also good practice for getting them to write detailed notes about what is or is not happening, about the physical surroundings, and about interactions, particularly conversations and the nonverbal behaviors that go along with those conversations. [78] Participant Observation—Students are asked to participate in some activity that takes at least 2 hours, during which they are not allowed to take any notes. Having a few friends or family members over for dinner is a good example of a situation where they must participate without taking notes. In this situation, the students must periodically review what they want to remember. They are instructed to remember as much as possible, then record their recollections in as much detail as they can remember as soon as possible after the activity ends. Students are cautioned not to talk to anyone or drink too much, so their recollections will be unaltered. The lesson here is that they must consciously try to remember bits of conversation and other details in chronological order. [79] When comparing their field notes from direct observation to participant observation, the students may find that their notes from direct observation (without participation) are more detailed and lengthy than with participant observation; however, through participation, there is more involvement in the activities under study, so there is likely to be better interpretation of
what happened and why. They also may find that participant observation lends itself better to recollecting information at a later time than direct observation. [80]
SUMMARY
Participant observation involves the researcher’s involvement in a variety of activities over an extended period of time that enable him/her to observe the cultural members in their daily lives and to participate in their activities to facilitate a better understanding of those behaviors and activities. The process of conducting this type of field work involves gaining entry into the community, selecting gatekeepers and key informants, participating in as many different activities as are allowable by the community members, clarifying one’s findings through member checks, formal interviews, and informal conversations, and keeping organized, structured field notes to facilitate the development of a narrative that explains various cultural aspects to the reader. Participant observation is used as a mainstay in field work in a variety of disciplines, and, as such, has proven to be a beneficial tool for producing studies that provide accurate representation of a culture. This paper, while not wholly inclusive of all that has been written about this type of field work, presents an overview of what is known about it, including its various definitions, history, and purposes, the stances of the researcher, and information about how to conduct observations in the field. [81]
Notes
1) Validity is a term typically associated with quantitative research; however, when viewed in terms of its meaning of reflecting what is purported to be measured/observed, its use is appropriate. Validity in this instance may refer to context validity, face validity or trustworthiness as described by LINCOLN and GUBA (1994).
2) Many years after MEAD studied the Samoan girls, FREEMAN replicated MEAD’s study and derived different interpretations. FREEMAN’s study suggested that MEAD’s informants had misled her by telling her what they wanted her to believe, rather than what was truthful about their activities.
REFERENCES
1. Adler, Patricia A. & Adler, Peter (1987). Membership roles in field research. Newbury Park: Sage.
2. Adler, Patricia A. & Adler, Peter (1994). Observation techniques. In Norman K. Denzin & Yvonna S. Lincoln (Eds.), Handbook of qualitative research (pp.377-392). Thousand Oaks, CA: Sage.
3. Agar, Michael H. (1980). The professional stranger: an informal introduction to ethnography. San Diego: Academic Press.
4. Angrosino, Michael V. & Mays dePerez, Kimberly A. (2000). Rethinking observation: From method to context. In Norman K. Denzin & Yvonna S. Lincoln (Eds.), Handbook of qualitative research (second edition, pp.673-702). Thousand Oaks, CA: Sage.
5. Bernard, H. Russell (1994). Research methods in anthropology: qualitative and quantitative approaches (second edition). Walnut Creek, CA: AltaMira Press.
6. Bernard, H. Russell (Ed.) (1998). Handbook of methods in cultural anthropology. Walnut Creek: AltaMira Press.
7. Breuer, Franz & Roth, Wolff-Michael (2003, May). Subjectivity and reflexivity in the social sciences: epistemic windows and methodical consequences [30 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 4(2), Art.25. Available at http://www.qualitative-research.net/fqs-texte/2-03/203intro-3-e.htm [April 5, 2005].
8. deMunck, Victor C. & Sobo, Elisa J. (Eds.) (1998). Using methods in the field: a practical introduction and casebook. Walnut Creek, CA: AltaMira Press.
9. DeWalt, Kathleen M. & DeWalt, Billie R. (1998). Participant observation. In H. Russell Bernard (Ed.), Handbook of methods in cultural anthropology (pp.259-300). Walnut Creek: AltaMira Press.
10. DeWalt, Kathleen M. & DeWalt, Billie R. (2002). Participant observation: a guide for fieldworkers. Walnut Creek, CA: AltaMira Press.
11. Ellis, Carolyn (2003, May). Grave tending: with mom at the cemetery [8 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 4(2), Art.28. Available at http://www.qualitative-research.net/fqs-texte/2-03/2-03ellis-e.htm [April 5, 2005].
12. Erlandson, David A.; Harris, Edward L.; Skipper, Barbara L. & Allen, Steve D. (1993). Doing naturalistic inquiry: a guide to methods. Newbury Park, CA: Sage.
13. Fine, Gary A. (2003). Towards a peopled ethnography: developing theory from group life. Ethnography, 4(1), 41-60.
14. Gaitan, Alfredo (2000, November). Exploring alternative forms of writing ethnography. Review Essay: Carolyn Ellis and Arthur Bochner (Eds.) (1996). Composing ethnography: Alternative forms of qualitative writing [9 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 1(3), Art.42. Available at http://www.qualitative-research.net/fqs-texte/3-00/300review-gaitan-e.htm [April 5, 2005].
15. Gans, Herbert J. (1999). Participant observation in the era of “ethnography.” Journal of Contemporary Ethnography, 28(5), 540-548.
16. Geertz, Clifford (1973). Thick description: Towards an interpretive theory of culture. In Clifford Geertz (Ed.), The interpretation of cultures (pp.3-32). New York: Basic Books.
17. Glantz, Jeffrey & Sullivan, Susan (2000). Supervision in practice: 3 steps to improving teaching and learning. Corwin Press, Inc.
18. Glickman, Carl D.; Gordon, Stephen P. & Ross-Gordon, Jovita (1998). Supervision of instruction (fourth edition). Boston: Allyn & Bacon.
19. Gold, Raymond L. (1958). Roles in sociological field observations. Social Forces, 36, 217-223.
20. Holman Jones, Stacy (2004, September). Building connections in qualitative research. Carolyn Ellis and Art Bochner in conversation with Stacy Holman Jones [113 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 5(3), Art.28. Available at http://www.qualitative-research.net/fqs-texte/3-04/04-3-28-e.htm [April 5, 2005].
21. Johnson, Allen & Sackett, Ross (1998). Direct systematic observation of behavior. In H. Russell Bernard (Ed.), Handbook of methods in cultural anthropology (pp.301-332). Walnut Creek: AltaMira Press.
22. Kawulich, Barbara B. (1998). Muscogee (Creek) women’s perceptions of work (Unpublished doctoral dissertation, Georgia State University).
23. Kawulich, Barbara B. (2004). Muscogee women’s identity development. In Mark Hutter (Ed.), The family experience: a reader in cultural diversity (pp.83-93). Boston: Pearson Education.
24. Kottak, Conrad P. (1994). Cultural anthropology (sixth edition). New York: McGraw-Hill.
25. Kroeber, Alfred L. (1939). Cultural and natural areas of Native North America. Berkeley: University of California Press.
26. Kutsche, Paul (1998). Field ethnography: a manual for doing cultural anthropology. Upper Saddle River, NJ: Prentice Hall.
27. Levi-Strauss, Claude (1953). Social structure. In Alfred L. Kroeber (Ed.), Anthropology today (pp.524-553). Chicago: University of Chicago Press.
28. Lincoln, Yvonna S. & Guba, Egon G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
29. Marshall, Anne & Batten, Suzanne (2004, September). Researching across cultures: issues of ethics and power [17 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 5(3), Art.39. Available at http://www.qualitative-research.net/fqs-texte/3-04/04-3-39-e.htm [April 5, 2005].
30. Marshall, Catherine & Rossman, Gretchen B. (1989). Designing qualitative research. Newbury Park, CA: Sage.
31. Marshall, Catherine & Rossman, Gretchen B. (1995). Designing qualitative research. Newbury Park, CA: Sage.
32. Merriam, Sharan B. (1988). Case study research in education: a qualitative approach. San Francisco: Jossey-Bass Publishers.
33. Merriam, Sharan B. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass Publishers.
34. Pike, Kenneth L. (1966). Emic and etic standpoints for the description of behavior. In Alfred G. Smith (Ed.), Communication and culture (pp.152-163). New York: Holt, Rinehart & Winston.
35. Ratner, Carl (2002, September). Subjectivity and objectivity in qualitative methodology [29 paragraphs]. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research [On-line Journal], 3(3), Art.16. Available at http://www.qualitative-research.net/fqs-texte/3-02/3-02ratner-e.htm [April 5, 2005].
36. Schensul, Stephen L.; Schensul, Jean J. & LeCompte, Margaret D. (1999). Essential ethnographic methods: observations, interviews, and questionnaires (Book 2 in Ethnographer›s Toolkit). Walnut Creek, CA: AltaMira Press. 37. Schmuck, Richard (1997). Practical action research for change. Arlington Heights, IL: IRI/Skylight Training and Publishing. 38. Spradley, James P. (1979). The ethnographic interview. Fort Worth: Harcourt Brace Jovanovich College Publishers. 39. Spradley, James P. (1980). Participant observation. New York: Holt, Rinehart and Winston. 40. Spradley, James P. & McCurdy, David W. (1972). The Cultural Experience. Chicago: Science Research Associates. 41. Steward, Julian H. (1955). Theory of culture change: the methodology of multilinear evolution. Urbana: University of Illinois Press. 42. Taylor, Steven J. & Bogdan, Robert (1984). Introduction to qualitative research: The search for meanings (second edition). New York: John Wiley. 43. Werner Oswald & Schoepfle, G. Mark (1987). Systematic fieldwork: Vol. 1. Foundations of ethnography and interviewing. Newbury Park, CA: Sage Publications. 44. Whyte, William F. (February, 1979). On making the most of participant observation. The American Sociologist, 14, 56-66. 45. Wolcott, Harry F. (2001). The art of fieldwork. Walnut Creek, CA: AltaMira Press.
Chapter 6

ATTITUDES TOWARDS PARTICIPATION IN A PASSIVE DATA COLLECTION EXPERIMENT

Bence Ságvári,1,2 Attila Gulyás,1 and Júlia Koltai1,3,4

1 Computational Social Science—Research Center for Educational and Network Studies (CSS–RECENS), Centre for Social Sciences, Tóth Kálmán Utca 4, 1097 Budapest, Hungary
2 Institute of Communication and Sociology, Corvinus University, Fővám tér 8, 1093 Budapest, Hungary
3 Department of Network and Data Science, Central European University, Quellenstraße 51, 1100 Vienna, Austria
4 Faculty of Social Sciences, Eötvös Loránd University of Sciences, Pázmány Péter Sétány 1/A, 1117 Budapest, Hungary
ABSTRACT

In this paper, we present the results of an exploratory study conducted in Hungary using a factorial design-based online survey to explore the willingness to participate in a future research project based on active and passive data collection via smartphones. Recently, the improvement of smart devices has enabled the collection of behavioural data on a previously unimaginable scale. However, the willingness to share this data is a key issue for the social sciences and often proves to be the biggest obstacle to conducting research. In this paper we use vignettes to test different (hypothetical) study settings that involve sensor data collection but differ in the organizer of the research, the purpose of the study and the type of collected data, the duration of data sharing, the number of incentives, and the ability to suspend and review the collection of data. Besides the demographic profile of respondents, we also include behavioural and attitudinal variables in the models. Our results show that the content and context of the data collection significantly change people's willingness to participate; however, their basic demographic characteristics (apart from age) and general level of trust seem to have no significant effect. This study is a first step in a larger project that involves the development of a complex smartphone-based research tool for hybrid (active and passive) data collection. The results presented in this paper help improve our experimental design to encourage participation by minimizing data-sharing concerns and maximizing user participation and motivation.

Keywords: data fusion, surveys, informed consent

Citation (APA): Ságvári, B., Gulyás, A., & Koltai, J. (2021). Attitudes towards Participation in a Passive Data Collection Experiment. Sensors, 21(18), 6085. (18 pages)

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
INTRODUCTION

Smartphone technologies, combined with the improvement of cloud-based research architecture, offer great opportunities for the social sciences. The most common methodology in the social sciences is still the use of surveys and other approaches that require the active participation of research subjects. However, there are some areas that are best researched not through surveys, but rather by observing individuals' behaviour in a continuous social experiment. Mobile technologies make it possible to observe behaviour on a new level by using raw data of various kinds collected by our most common everyday companion: the smartphone. Moreover, since smartphones shape our daily lives through the various actions available in countless apps, it is logical to consider them a platform for actual research. There have been numerous research projects that have relied on collecting participants' mobile sensor and app usage data, but the biggest concern has been the willingness to share this data. Privacy and trust
concerns both contribute to people’s unwillingness to provide access to their personal data, and uncovering these attitudes is a critical step for any successful experimental design. In this paper, we present the results of our pre-experimental survey to uncover prospective participants’ attitudes toward sharing their mobile sensor and app usage data. This experiment is part of a larger research and software development project aimed at creating a modular active and passive data collection tool for smartphones that could be used in social and health research. For this study we used data from an online survey representative of Internet users in Hungary. The aim of the survey was to analyse respondents’ attitudes (and not actual behaviour) towards using a hypothetical research app that performs active and passive data collection. The following section provides further details on the background of active/passive data collection and an outlook on results from other studies. We then discuss the details of the online panel used in our study, the survey design and the models used in the analysis. After presenting our results, we conclude by mentioning some open questions and limitations that can be addressed in further steps of this study.
BACKGROUND

Surveys, Active and Passive Data Collection

In recent decades, survey methods have been the main research tools in the social sciences. Technological advances have not changed that, but rather expanded it. Traditional paper-and-pencil interviews (PAPI) and surveys quickly adopted new technologies: interviews were conducted over the telephone (regular surveys), and as computers became mainstream, computer-aided survey methods emerged. This development took another leap when smartphone applications emerged along with cloud-based services, and smartphones suddenly became a viable platform for collecting survey data [1,2,3,4]. Although self-reported surveys generally suffer from bias for a variety of reasons [5,6,7,8], conducting surveys with smartphones is a very cost-effective method of data collection that also opens up opportunities to collect other, non-survey types of data. Such data include location information, application usage, media consumption, etc., all of which provide better insight into the behaviour and
social connections of individuals [9,10,11,12,13]. More importantly, since it is behavioural data, it is much less prone to bias than ordinary surveys. The collection of data is divided into two main categories depending on the participant's interaction with their smartphone: active and passive data collection. Active data collection means that an action by the participant is required to generate the collected information, and the participant is prompted by the research application to provide this information. This means that the participant triggers phone features (taking photos, recording other types of data, actively sending a location tag) while also giving consent for this data to be sent to the researching institution. Submitting surveys or survey-like inputs (e.g., gathering attitudes or moods [14,15,16]) can also be considered a form of active data collection. Passive data collection, on the other hand, means that sensor data from the smartphone is collected and sent periodically without the participant knowing that data was collected at any given time. There are various sensors that can be used in a smartphone: multiple location-based sensors (GPS, gyroscopes), accelerometers, audio sensors, Bluetooth radios, Wi-Fi antennas, and, with the advancement of technology, many other sensors such as pulse or blood pressure sensors. In the field of healthcare, such passive data collection is becoming the main solution for health monitoring in the elderly or in other special scenarios [17,18,19]. Obviously, such data collection approaches can be combined to provide instant data linkages [20], which can then be used to provide even richer information: for example, pulse measurements taken while respondents answer survey questions can help validate responses. In order to conduct such data collection in a legally and ethically acceptable manner, informed consent must be given by participants for every aspect of the data collection. With the introduction of the GDPR, there are clear requirements for recording participants' consent and handling their data, the key feature being that participants can withdraw their consent at any time during an experiment. For smartphone apps, the default requirement is that access to data and sensor information must be explicitly permitted by the user of the device. However, this consent does not apply to the sharing of data with third parties
, in this case the researching institution. The participants must give their explicit consent for their data to be collected and transferred from their device to a location unknown to them. Similarly, the researching institution must ensure proper handling of the data and is responsible to the participants for the security of their data. Several studies have found that people are generally reluctant to share their data when it comes to some form of passive data collection [21,22,23], mostly due to privacy concerns. However, people who frequently use their smartphones are less likely to have such concerns [23]. Over the past decade, the amount of data collected by various organizations has increased dramatically. This includes companies with whom users share their data with their consent [24], although users are probably unaware of the amount of data they are sharing and of how exactly it is exploited for commercial purposes. Several studies have found that people are much more likely to share data when they are actively engaged in the process (e.g., sending surveys, taking photos, etc.) than when they passively share sensor data [15,23]. This lower participation rate is influenced by numerous factors, so people's willingness to share data is itself an interesting research question.
Willingness to Share Data

As detailed as such data can be, participation rates in such experiments show diverse results, but they are generally rather low when it comes to passive data collection. In what follows, we will refer to the participation rate as "willingness to participate" or WTP, a commonly used abbreviation in this context. We have collected benchmark data from relevant articles studying WTP in various passive data collection scenarios. As Table 1 shows, WTP is mostly below 50%, both for cases where passive data collection is complemented by a survey and for cases where it is not. Although not evident from this summary, the presence of controlled active data collection had a positive effect on participation; however, only Bricka et al. [25] conducted an experiment that comparatively analysed the presence of active data collection.
Table 1: The ratio of willingness to share data in selected studies

Study | Passive Data Collection Contents | Willingness to Participate (WTP) | Note
Biler et al. (2013) [26] | GPS data | 8% |
Kreuter et al. (2019) [21] | mobile phone network quality, location, interaction history, characteristics of the social network, activity data, smartphone usage | 15.95% |
Toepoel and Lugtig (2014) [27] | GPS data | 26% | one-time, after a survey
Bricka et al. (2009) [25] | GPS with survey; GPS only | 30–73%; 12–27% | in this study, the participants would fill in multiple surveys
Pinter (2015) [28] | Location | 42% | this was only claimed willingness, not actual downloads of an application
Revilla et al. (2016) [22] | GPS data | 17.01–36.84% | this value is the min–max willingness rate of mobile/tablet users from the following countries: Argentina, Brazil, Chile, Colombia, Spain, Mexico, Portugal
Revilla et al. (2017) [29] | Web activity | 30–50% |
Scherpenzeel (2017) [30] | Location (GPS, Wi-Fi, cell) | 81% |
Wenz et al. (2019) [23] | GPS; usage | 39%; 28% |
Most of these studies required the participant to provide location information when filling out a questionnaire, sometimes just a snippet of it. Yet willingness to share this information is particularly low. This result is perplexing considering that most smartphone users share their location data with other apps (often not even in the context of providing location information); Google services and shopping apps are typical examples of such location data users.
The only outlier in this table is the study reported by Scherpenzeel [30], where the participation rate is suspiciously high. The participants in this study were panellists who were already part of a larger survey panel run by the institution, so there was neither a trust barrier to overcome nor an increased participation burden. Mulder and de Bruijne [15] went deeper in their study and measured willingness on a 7-point scale (1 = very unlikely to participate; 7 = very likely to participate) for different data collection types. In their sample, the mean willingness to participate in passive data collection was 2.2, indicating a very low willingness of respondents to participate. In the same study, they found a mean of 4.15 for participating in a traditional PAPI survey study and 3.62 for completing the survey via an app. Thus, the difference between the different ways of completing the survey was not large, but the inclusion of passive data collection had a strong negative impact. Given the participation rates for regular surveys in general, these even lower numbers are not very surprising. However, to conduct a successful experiment with an acceptable participation rate, it is important to identify the causes that lower the participation rate. In the following, we will look at some factors that have been analysed in different studies.
Importance of Institutional Trust

Trust in the institution collecting the data was found to be a key factor in the willingness to share data [21,31,32]. Several studies have examined the role of the researching institution in the willingness to share passive data. Participants' main concern regarding data collection is the privacy of their data. It is important to emphasise that a brief indication that the data will not be shared with third parties does not in itself generate trust among the users of an application; rather, it is the provider of the application that influences trust. Keusch et al. found that people are about twice as likely to trust research institutions not to share their sensitive data [21]. They measured WTP using an 11-point scale in a survey of panellists. By dichotomizing this scale to obtain a binary WTP variable, they found that WTP was similar for all three types of institutions (ranging from 33.1% to 36.9%). However, in their further analysis, they found that WTP was significantly higher for universities and statistical offices than for market research firms. Note, however, that in this study no participants downloaded an app; these results only show theoretical readiness.
Struminskaya et al. found similar results in their study [32], where they tested hypotheses comparing WTP for universities, statistical offices, and market research companies. They found that the WTP reported by respondents is highest for universities, followed by statistical offices, and lowest for market research firms. A practical result for this factor can be found in the study by Kreuter et al. [18], where participants were asked to download a research app sponsored by a research institution on their phone. In that study, the authors found a WTP of 15.9%, which was the app download rate.
Control Over the Data Collection

Passive data collection poses some risk to the user due to the lack of control over the data collection. Here, we consider the ability to temporarily suspend passive data collection from the app as "control" over data collection. Of course, it is possible to prevent an app from collecting data (by disabling location services or turning off the mobile device's Wi-Fi antenna), but here we refer to the case where the experimental app provides a built-in ability, by design, to suspend data collection. The best recent example of such an application was provided by Haas et al., where subjects could individually choose which data the application should collect [33]. In their application, users had to give their consent to the individual properties that the application could record: network quality and location information; interaction history; social network properties; activity data; and smartphone usage (Figure 13.2/a in [33]). They found that only a small percentage (20%) of participants changed these options after installing the app and only 7% disabled a data sharing feature. In their study, Keusch et al. specifically asked about WTP when the participant was able to temporarily suspend the data collection component of the app altogether and found a positive correlation [21]. This differs from the total control that Haas et al. offered participants, as they allowed the "level" of sharing to be adjusted rather than turned on and off [33]. Another form of control, the ability to review and change recorded data, was present in the survey of Struminskaya et al. The corresponding indication in their survey was rather vague, and probably that is why they did not find a significant effect for it [32].
Incentives

Another way to improve WTP is to provide monetary incentives for participation. Haas et al. focused their analysis on different types of incentives paid at different points in an experiment [33]. Incentives can be given in different time frames for different activities of the participants. In terms of time frame, it is common to offer incentives for installing a research application or at the end of the survey, but it is also possible to offer recurring incentives. Another option is to offer incentives based on "tiers" of how much data participants are willing to provide. In their study, Haas et al. also examined the impact of incentives on installation and on the rewarded sharing of various sensor data. There was a positive effect of initial incentives, but interestingly, they did not find the expected positive effect of incentives on granting access to more data-sharing functions. Another interesting finding was that a higher overall incentive did not increase participants' willingness to keep the application installed over a longer experimental period. Beyond these findings, their overall conclusion was that incentives improve participation in much the same way as in regular survey studies. The results of Keusch et al. also support this finding [21].
Other Factors

Keusch et al. [21] found that a shorter experimental period (one month as opposed to six months) and monetary incentives increased willingness to participate in a study. As another incentive, Struminskaya et al. [32] found that actual interest in the research topic (participants can receive feedback on research findings) is also a positive factor for an increased level of participation. Finally, participants' limited ability to use devices was also found to be a factor in the study by Wenz et al. [23]. They found that individuals who rated their own usage abilities as below average (below 3 on a 5-point scale) showed a significantly lower willingness to participate, especially in passive data collection tasks. On the other hand, those who reported advanced phone use skills were much more willing to participate in such tasks. Although not necessarily related to age, Mulder and de Bruijne found in another study that willingness to participate decreased dramatically after age 50 [15]. These results indicate that usability is important when designing a research application.
As these results show, there are many details to analyse when designing an experiment that relies on passive data collection. Some of the studies used surveys to uncover various latent characteristics that influence willingness to participate, while others deployed a working research application to gather practical usage information. Given that many studies reported low WTP scores, we concluded that it is very important to conduct a preliminary study before elaborating the final design of such an experiment. Therefore, the goal of this work is to find out how we can implement a research tool that motivates participation in the study and still collects a useful amount of information.
METHODS AND DESIGN

To collect the information on WTP needed to design and fine-tune our research ecosystem and its user interface components, we decided to conduct an online vignette survey using a representative sample of smartphone users in Hungary. In this section, we first formulate our research questions and then present the methods and models used to answer them.
Research Questions

Because our study is exploratory in nature, we did not formulate explicit research hypotheses, but designed our models and the survey to be able to answer the following questions:

• Q1. What is the general level of WTP in a passive data collection study? In order to have a single benchmark and to provide a comparison with similar studies, we asked a simple question about whether respondents would be willing to participate in a study built on smartphone-based passive data collection.
• Q2. What features of the research design would motivate people to participate in the study? We included several questions in our survey that address key features of the study: the type of institute conducting the experiment, the type of data collected, the length of the study, monetary incentives, and control over data collection. We wanted to know which of these features should be emphasized to maximize WTP.
• Q3. What kind of demographic attributes influence WTP? As mentioned in previous studies, age may be an important factor for participation, but we also considered other characteristics, such as gender, education, type of settlement, and geographic region of residence.
• Q4. What is the role of trust-, skills-, and privacy-related contextual factors in WTP? As previous results suggest, trust, previous (negative) experiences, and privacy concerns might be key issues in how people react to various data collection techniques. We used composite indicators to measure the effect of interpersonal and institutional trust, smartphone skills and usage, and general concerns over active and passive data collection methods on WTP.
Survey and Sample Details

Data collection for this study was conducted by a market research firm using its online access panel of 150,000 registered users. The sample is representative of Hungarian Internet users in terms of gender, age, education level, type of settlement and geographical region. The online data collection ran from 9 to 20 June 2021. The average time to complete the survey was 15 min. Basic descriptive statistics of the sample are shown in Table A1 in Appendix A. Apart from a few single items, the survey consisted of thematic blocks of multiple-choice or Likert-scale questions. Among others, we asked respondents about interpersonal and institutional trust, general smartphone use habits, and concerns about various active and passive digital data collection techniques using smartphones. The items on trust in the survey were adapted from the European Social Survey (ESS), so they are well-tested and have been used for a long time. With the exception of the last block of the questionnaire, all questions were the same for all respondents. In the last block, a special factorial survey technique was used to ask questions about willingness to participate in a hypothetical smartphone-based passive data collection study [34,35]. The factorial survey included situations, called "vignettes", in which several dimensions of the situation are varied. The vignettes described situations of a hypothetical data collection study and respondents had to
decide how likely they would be willing to participate. An example of a vignette is shown in Box 1, with the varying dimensions of the situation underlined, and the exact wording of the outcome variable (WTP).

Box 1. An example of a vignette.
Five dimensions were varied in the vignettes, with the following values:

• The organizer of the research: (1) decision-makers, (2) a private company, (3) scientific research institute.
• Data collected: (1) spatial movement, (2) mobile usage, (3) communication habits, (4) spatial movement & mobile usage, (5) spatial movement & communication habits, (6) mobile usage & communication habits, (7) all three.
• Length of the research: (1) one month, (2) six months.
• Incentive: (1) HUF 5000 after installing the application, (2) HUF 5000 after the completion of the study, (3) HUF 5000 after installing the application and HUF 5000 after the completion of the study.
• Interruption and control: (1) user cannot interrupt the data collection, (2) user can temporarily interrupt the data collection, (3) user can temporarily interrupt the data collection and review the data and authorize its transfer.

Following Jasso, the creation of the vignettes proceeded as follows [35]: First, we created a "universe" in which all combinations of the dimensions described above were present, which included 378 different situations (3 × 7 × 2 × 3 × 3). From these 378 situations, we randomly selected 150 and assigned them, also randomly, to 15 different vignette blocks, which we call decks. Each deck included 10 different vignettes and an additional control vignette to
test the internal validity of the experiment. The content of this last vignette was the same as a randomly selected vignette from the first nine items previously evaluated. The results show a high degree (64%) of consistency between responses to the same two vignettes, suggesting a satisfactory level of internal validity (see Appendix B for details on the analysis of this test). In this manner, each respondent completed one randomly assigned deck with 10 + 1 vignettes. In total, 11,000 vignettes were answered by 1000 participants. (Data from the 11th vignette were excluded from the analysis.) The descriptive statistics of the vignette dimensions are presented in Table 2.

Table 2: The descriptive statistics of the dimensions of vignettes

Organizer of the research: decision-makers 35.3% | private company 34.2% | scientific research institute 30.4%
Data collected: spatial movement 12.3% | mobile usage 15.8% | communication habits 13.5% | spatial movement & mobile usage 13.8% | spatial movement & communication habits 18.2% | mobile usage & communication habits 11.3% | all three 15.1%
Length of the research: one month 48.1% | six months 51.9%
Incentive: HUF 5000 after installing the application 33.7% | HUF 5000 after the completion of the research 33.6% | HUF 5000 after installing the application and HUF 5000 after the completion of the research 32.6%
Interruption and control: user cannot interrupt the data collection 34.2% | user can interrupt the data collection 31.7% | user can interrupt the data collection and has control over their data 34.2%
Willingness to participate: min. 0 | max. 10 | mean 4.50 | standard deviation 3.65

Notes: vignette-level data, N = 10,000.
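As a rough illustration of the deck construction described above, the following sketch (in Python, with hypothetical dimension labels and no claim to reproduce the authors' exact procedure) generates the full 378-situation universe, samples 150 situations, and splits them into 15 decks of 10 vignettes each:

```python
import itertools
import random

# Hypothetical encoding of the five vignette dimensions (labels abbreviated).
dimensions = {
    "organizer": ["decision-makers", "private company", "research institute"],
    "data": ["movement", "usage", "communication", "movement+usage",
             "movement+communication", "usage+communication", "all three"],
    "length": ["one month", "six months"],
    "incentive": ["on install", "on completion", "both"],
    "control": ["no interruption", "interruption", "interruption+review"],
}

# Full factorial universe: 3 * 7 * 2 * 3 * 3 = 378 situations.
universe = [dict(zip(dimensions, combo))
            for combo in itertools.product(*dimensions.values())]
assert len(universe) == 378

random.seed(42)                          # reproducible illustration only
sampled = random.sample(universe, 150)   # draw 150 situations at random
random.shuffle(sampled)

# Assign the 150 situations to 15 decks of 10 vignettes each; a control
# vignette (a duplicate of one deck item) would be appended separately.
decks = [sampled[i * 10:(i + 1) * 10] for i in range(15)]
assert all(len(deck) == 10 for deck in decks)
```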
This technique allowed us to combine the advantages of large surveys with the advantages of experiments. Due to the large sample size, the analysis has strong statistical power, and we can also dissociate the effects of different stimuli (dimensions) using multilevel analysis [36,37]. Thus, we can examine the effect of multiple variables on the outcome variable (WTP measured on a 0 to 10 scale). In addition to vignette-level variables, we also included respondent-level variables in the model to examine how individual characteristics influence the effects of vignette dimensions on participation. We included both respondent-level sociodemographic variables and attitudinal variables in the model. The sociodemographic variables were gender (coded as males and females); age; education with four categories (primary school or lower, vocational, high school, college); place of residence with the type of settlement (capital city, county seat, town, village); and the seven major regions of Hungary (Central Hungary, Northern Hungary, Northern Great Plain, Southern Great Plain, Southern Transdanubia, Central Transdanubia, and Western Transdanubia). The attitudinal variables we used in the models were the following. The first variable captures how many types of activities the respondent uses their smartphone for: we queried 15 different activities (see Table A2 in Appendix A for the full list) and simply counted the activities for which the respondent actively uses his or her smartphone. The personal trust variable is the average of responses to three trust-related items (see Table A3 in Appendix A for details) measured on a scale from 0 to 10, where 0 represents complete distrust and 10 represents complete trust. We performed the same calculation for trust in institutions: we listed several institutions (see Table A4 in Appendix A for the full list) and asked respondents to indicate their level of trust on a scale of 0 to 10, where "0" means they do not trust the institution at all and "10" means they trust it completely. We also listed several digital data collection techniques and asked respondents how concerned they would be about sharing such information for scientific research, emphasizing that their data would only be used anonymously and in aggregated form, without storing their personal information. The response options were 1 to 4, with 1 meaning "would be completely concerned" and 4 meaning "would not be concerned at all." In total, we asked about 18 different active and passive data collection techniques (see Table A5 in Appendix A for the full list of items), from which we formed two separate indices: 6 items measured active, and another 12 items measured passive, data collection techniques. For both composite indicators, we counted the number of items scored 1 or 2 (i.e., those indicating concern).
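A minimal sketch of how such composite indicators could be computed from the respondent-level data; all column and file names below are hypothetical and only illustrate the logic described above:

```python
import pandas as pd

# Hypothetical respondent-level survey data.
survey = pd.read_csv("survey_respondents.csv")

# Count of smartphone activities: 15 binary columns, one per queried activity.
activity_cols = [f"activity_{i}" for i in range(1, 16)]
survey["smartphone_activities"] = survey[activity_cols].sum(axis=1)

# Personal and institutional trust: means of the corresponding 0-10 items.
survey["personal_trust"] = survey[["trust_people", "trust_fair", "trust_helpful"]].mean(axis=1)
institution_cols = [f"inst_trust_{i}" for i in range(1, 9)]
survey["institutional_trust"] = survey[institution_cols].mean(axis=1)

# Concern indices: count how many data collection items were rated 1 or 2
# ("rather concerned") on the 1-4 scale, separately for active and passive items.
active_cols = [f"concern_active_{i}" for i in range(1, 7)]
passive_cols = [f"concern_passive_{i}" for i in range(1, 13)]
survey["active_concerns"] = (survey[active_cols] <= 2).sum(axis=1)
survey["passive_concerns"] = (survey[passive_cols] <= 2).sum(axis=1)
```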
To assess the internal consistency of the indices, we computed Cronbach's alpha, which proved to be acceptable in each case. In addition to the sociodemographic variables and the composite indices, we added two other variables: the time respondents spend online and the time they use their smartphones (in minutes) on an average day.
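For reference, Cronbach's alpha for a set of items can be computed directly from its definition; this small helper (assuming a DataFrame with one column per item) is a generic sketch rather than the authors' own code:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Example (hypothetical columns): internal consistency of the passive-concern items.
# alpha_passive = cronbach_alpha(survey[passive_cols])
```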
Analysis and Models

The analysis can be divided into four parts. First, we simply checked the descriptive results of the benchmark variable showing the general level of willingness to participate in a smartphone-based passive data collection study. In the next step, we constructed variance component models to understand the direct effect of the decks, by calculating how much of the total variance in the vignette outcome is explained by respondent characteristics versus the deck of vignettes. In the second part of the analysis, we created three linear regression models. These models were multilevel models because the analyses were conducted at the vignette level, but each set of 10 vignettes was completed by the same subject. Thus, the assumption of observational independence, which is required in ordinary linear regression, does not hold. To control for these dependencies, we used multilevel mixed models. In the first model, we included only the independent variables at the vignette level. Then, in a second step, we added respondents' sociodemographic characteristics, as we assumed that these influence respondents' willingness to participate. In a third step, we additionally included the composite indices of the attitudinal variables at the respondent level. In the final step of the analysis, we added cross-level interaction terms to the model to examine how the effects of vignette-level dimensions vary with respondent-level characteristics.
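The modelling strategy can be sketched with a standard mixed-model library. The snippet below (statsmodels, with hypothetical column and file names) fits an intercept-only variance-components model and a vignette-level model in the spirit of Model 1, using a random intercept per respondent to account for the clustering of vignettes within respondents; it is a simplified sketch that ignores the deck level.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical vignette-level data: one row per evaluated vignette,
# with 'respondent_id' linking the 10 vignettes of a deck to their rater.
vignettes = pd.read_csv("vignette_ratings.csv")

# Intercept-only (variance component) model: how much of the variance in WTP
# is attributable to differences between respondents?
null_model = smf.mixedlm("wtp ~ 1", vignettes, groups=vignettes["respondent_id"]).fit()
between_var = float(null_model.cov_re.iloc[0, 0])   # respondent-level variance
within_var = null_model.scale                       # residual (vignette-level) variance
print("share of variance between respondents:", between_var / (between_var + within_var))

# Vignette-level model: the five dimensions as categorical predictors,
# random intercept per respondent.
model1 = smf.mixedlm(
    "wtp ~ C(organizer) + C(data_collected) + C(length) + C(incentive) + C(control)",
    vignettes,
    groups=vignettes["respondent_id"],
).fit()
print(model1.summary())
```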
RESULTS AND DISCUSSION

In general, 50 percent of respondents would participate in a study that includes the passive collection and sharing of data from the respondents' smartphones. The online access panel that was used for this survey includes panellists who from time to time take part in active data collection (i.e., filling out online surveys on their PCs, laptops or smartphones), so they presumably comprise a rather active and more motivated segment of
the Hungarian internet users, but one in two of them seems to be open to passive data collection as well (Q1). The vignettes used in the survey were designed to uncover the internal motives and factors that shape this level of willingness. In the first step of this analysis, we built two intercept-only models, in which the dependent variable was the vignette outcome and there were no independent variables, while controlling for the level of decks and the level of respondents. Based on the estimated covariance parameters, we could determine the share of variance explained by the different levels. The variance component models revealed that 77.6 percent of the total variance in the vignette outcome is explained by respondent characteristics and 1.4 percent is explained by the deck of vignettes. Thus, the effect of the decks (the design of the vignette study) is quite small. We then created three multilevel regression models (Table 3). In the first one (Model 1), we included only the independent variables at the vignette level. Then, in a second step (Model 2), we added the socio-demographic characteristics of the respondents, as we assumed that they influence the respondents' willingness to participate. In a third step (Model 3), we also included composite indices of respondent-level attitudinal variables. Table 3 shows the results of the three models.

Table 3: Multilevel regression models on the willingness to participate

Dependent variable: Willingness to Participate | Model 1 | Model 2 | Model 3
Intercept | 5.85 * (0.13) | 8.12 * (0.82) | 6.94 * (1.04)

Vignette-level variables
Organizer of the research (ref: decision makers)
  Private company | 0.15 * (0.04) | 0.15 * (0.04) | 0.22 * (0.04)
  Scientific research institute | 0.29 * (0.04) | 0.29 * (0.04) | 0.36 * (0.05)
Data collected (ref: all three)
  Spatial movement | 0.09 (0.07) | 0.09 (0.07) | 0.05 (0.07)
  Mobile usage | −0.11 (0.06) | −0.11 (0.06) | −0.14 * (0.07)
  Communication habits | −0.02 (0.07) | −0.01 (0.07) | −0.09 (0.07)
  Movement & usage | −0.02 (0.06) | −0.02 (0.06) | −0.06 (0.07)
  Movement & communication | 0.03 (0.07) | 0.03 (0.07) | −0.05 (0.08)
  Usage & communication | 0.00 (0.06) | 0.00 (0.06) | −0.02 (0.07)
Length of the research (ref: one month) | −0.68 * (0.03) | −0.68 * (0.03) | −0.74 * (0.04)
Incentive (ref: after downloading the app & after the end of the research)
  After downloading the app | −0.43 * (0.04) | −0.43 * (0.04) | −0.44 * (0.05)
  After the end of the research | −0.47 * (0.04) | −0.47 * (0.04) | −0.49 * (0.05)
Interruption and control (ref: user can interrupt the data collection and has control over their data)
  User can interrupt the data collection | −0.13 * (0.04) | −0.13 * (0.04) | −0.17 * (0.05)
  User cannot interrupt the data collection | −0.59 * (0.04) | −0.59 * (0.04) | −0.64 * (0.04)

Respondent-level socio-demographic variables
  Gender (ref: men) | | −0.30 (0.22) | −0.16 (0.22)
  Age (+: older) | | −0.04 * (0.01) | −0.02 * (0.01)
  Education (+: higher) | | −0.24 (0.13) | −0.20 (0.14)
  Type of settlement (+: smaller) | | 0.13 (0.12) | −0.02 (0.12)
  Region (ref: Western Transdanubia)
    Central Hungary | | 0.07 (0.41) | 0.29 (0.43)
    Northern Hungary | | 0.08 (0.47) | 0.52 (0.48)
    Northern Great Plain | | 0.02 (0.45) | 0.46 (0.48)
    Southern Great Plain | | 0.44 (0.46) | 0.62 (0.48)
    Southern Transdanubia | | 0.28 (0.48) | 0.35 (0.52)
    Central Transdanubia | | 0.72 (0.47) | 0.84 (0.50)

Respondent-level attitude indices
  Smartphone activities (+: multiple) | | | 0.10 * (0.04)
  Personal trust (+: high) | | | 0.07 (0.05)
  Institutional trust (+: high) | | | 0.12 (0.07)
  Time spent online on an average day (minutes) | | | 0.00 (0.00)
  Time spent using their smartphone on an average day (minutes) | | | 0.00 (0.00)
  Number of active data collection methods mentioned as rather worrying | | | −0.06 (0.09)
  Number of passive data collection methods mentioned as rather worrying | | | −0.20 * (0.04)

AIC | 44,997.2 | 44,977.2 | 37,066.5
BIC | 45,011.8 | 44,991.8 | 37,080.7
Observations | 10,000 | 10,000 | 10,000

Notes: Standard errors in parentheses. * p < 0.001.

Results of Model 1 revealed that, compared to policymakers, respondents are significantly more likely to participate in research conducted by a private company (by an average of 0.15 points on the 0-to-10 scale) or a scientific research institute (by an average of 0.29 points). People are more willing (by an average of 0.68 points) to participate in a study that lasts only one month compared to one that lasts six months. And, not surprisingly, they
would be more likely to participate in a study if they were paid twice instead of once, by about 0.44 points. The chance of participating is highest if the user can suspend the data collection at any time and view the collected data when needed; the two options of no suspension and of suspension without control over the data showed a lower chance of participation (by an average of 0.59 and 0.13 points, respectively). Interestingly, there were no significant differences by the purpose of the data collection and thus the type of data collected: compared to the reference category, where all three types of data are requested, none of the other types or combinations of data collection showed a significantly lower or higher level of participation (Q2). We included respondents' sociodemographic variables in Model 2. Doing so did not really change the effects of the vignette dimensions. Interestingly, none of the sociodemographic characteristics have a significant effect on participation, with the exception of age: in accordance with previous research, older individuals are less likely to participate. The expected willingness to participate decreases by 0.04 points with each additional year of age (Q3). In Model 3, we added respondents' attitudinal indices to the model. The addition of the respondent-level attitudinal variables did not really change the effects of the variables compared to the previous models. Of the attitudinal variables, none of the trust indices appear to have a significant effect; however, smartphone use and concerns about passive data collection do change the likelihood of participation. The more activities someone uses their smartphone for, and the more time they spend using it, the more likely they are to participate in such a study; and the more types of passive data collection someone has concerns about, the less likely they are to participate (Q4).
Varying Effects of the Vignette-Level Variables among Respondents

In the next step, we tested the vignette-level variables that had a significant effect on willingness to participate to see if their effect differed across respondents. These variables were the length of the study, the organizer of the research, the type of incentive, and the possibility of suspension and control. We set the slope of these variables to random (one at a time, separately in different models) and tested whether the random slopes were significant, that is, whether the effects varied across respondents. To achieve convergent models, we transformed some of the vignette-level variables into dummies. The transformation was based on the results of the previous models and
categorized together those values that showed the same direction of effects. The organizer of the research was coded as (a) private company or scientific research institute vs. (b) policymakers. The type of incentive was categorized as (a) only one incentive vs. (b) two incentives given. The opportunity of suspension was transformed into (a) no opportunity to suspend, or suspension only, vs. (b) suspension with control over the transferred data. The results showed that all of these variables had significant random slopes, so all of their effects varied between respondents.
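In a mixed-model framework, allowing such a slope to vary across respondents amounts to adding a random slope for the dummy in question. A hedged sketch of this step, continuing the earlier example and using hypothetical dummy names such as short_study:

```python
import statsmodels.formula.api as smf

# Random intercept plus a random slope for the (hypothetical) dummy 'short_study'
# (1 = one-month study, 0 = six-month study), allowing its effect to differ
# from respondent to respondent; the other recoded dummies enter as fixed effects.
random_slope_model = smf.mixedlm(
    "wtp ~ short_study + research_org + two_incentives + full_control",
    vignettes,
    groups=vignettes["respondent_id"],
    re_formula="~short_study",
).fit()
print(random_slope_model.summary())  # inspect the variance of the random slope
```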
Interaction of Vignette- and Respondent Level Variables on the Willingness to Participate We also tested all interaction terms of those vignette variables that had significant random slopes (length of the study, organizer of the research, type of incentive, and possibility of suspension and control) with those respondent level variables that had a significant effect on willingness to participate (age, smartphone use, and concerns about the passive nature of data collection). Of the twelve interactions tested, six proved to be significant. Figure 1 shows the nature of these interactions with the means of the predicted values. For illustration purposes, we divided each ratio-level variable into two categories and used their mean as cut values.
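A cross-level interaction of this kind can be added to the same mixed model as a product term between a vignette-level dummy and a respondent-level characteristic. The following sketch (hypothetical names, continuing the earlier examples) illustrates the idea for study length and smartphone usage:

```python
import statsmodels.formula.api as smf

# Dichotomize the respondent-level smartphone-activity count at its mean,
# mirroring the illustration used for the predicted-value plots.
vignettes["high_usage"] = (
    vignettes["smartphone_activities"] > vignettes["smartphone_activities"].mean()
).astype(int)

# Cross-level interaction: does the effect of a short study depend on usage?
interaction_model = smf.mixedlm(
    "wtp ~ short_study * high_usage",
    vignettes,
    groups=vignettes["respondent_id"],
    re_formula="~short_study",
).fit()

# Mean fitted WTP for the four combinations of the two dummies.
print(vignettes.assign(pred=interaction_model.fittedvalues)
      .groupby(["short_study", "high_usage"])["pred"].mean())
```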
Figure 1: Predicted values of cross-level interactions. (a) Length of study with smartphone usage. (b) Length of study with concerns over passive data collection. (c) Number of incentives with smartphone usage. (d) Number of incentives with concerns over passive data collection. (e) Interruption and control with age. (f) Interruption and control with smartphone usage.
We can observe that a shorter research duration predicts a higher probability of participation, but this effect is stronger for those who use their smartphone for more types of activities than for those who use it for fewer (a). Interestingly, the effect of the length of study is stronger among those with fewer concerns about passive types of data collection and weaker among those with more concerns (b). When we consider the number of incentives, we can see that while two incentives generally increase the odds of participation compared to only one incentive, this effect is stronger among those who use their smartphone for fewer types of activities and weaker among those who use it for more activities (c). In addition, the effect of the incentive is stronger among those who have fewer concerns about passive data collection and weaker among those who have more concerns (d). In the original model (Model 3, Table 3), we could see that someone is more likely to participate if they can interrupt the study when they want to and if they can review the data collected about them, compared to being able only to suspend, or not being able to suspend at all. Based on the interactions, this effect is stronger for younger individuals than for older individuals (e). When we account for smartphone usage, we see that the effect of the type of suspension disappears for those with lower smartphone usage and persists only for those who use their smartphone for more tasks (f).
CONCLUSIONS

With this study, we aimed to continue the series of analyses examining users' attitudes toward smartphone data collection based on passive sensors and other device information. Overall, our results are consistent with the findings of previous research: we found evidence that a more trusted survey organiser/client, a shorter duration of data collection, multiple incentives, and control over data collection can significantly influence willingness to participate. The results also show that, apart from age (a major determinant of digital technology use and attitudes towards digital technologies), demographic characteristics alone do not play an important role. This finding might be biased by the general characteristics of the online panel we used for the survey, but it may be important information for future studies that aim for representativeness of the online (smartphone user) population. Contrary to our preliminary expectations, trust in people and institutions alone does not seem to have a notable effect. This is especially noteworthy given that Hungarian society generally has a lower level of personal and institutional trust compared to Western and Northern European countries. However, general attitudes toward technology, the complexity and intensity of smartphone use, and general concerns about passive data collection may be critical in determining who is willing to participate in future research. Asking people about their future behaviour in hypothetical situations has obvious limitations. In our case, this means that there is a good chance that we would get different results if we asked people to download an existing, ready-to-test app and to both actively and passively collect real, personal data from users. We were mainly interested in people's feelings, fears and expectations that determine their future actions, and we suggest that our results provide valid insights. It should also be mentioned that in this research we focused mostly on the dimensions analysed in previous studies and included them in our own analysis. Of course, there are many other important factors that can influence the willingness of users to participate. Our aim was therefore not to provide a complete picture, but to gather important aspects that could enrich our collective knowledge on smartphone-based passive data collection and inform our own application development process.
ACKNOWLEDGMENTS

We thank János Klenovszki, Bálint Markos and Norbert Sárközi at NRC Ltd. for the professional support they provided us in conducting the online survey. The authors also wish to thank the anonymous reviewers for their feedback on the manuscript.
APPENDIX A. SURVEY DETAILS

Table A1: Sample characteristics by sample size, unweighted and weighted sample distribution

Category | Unweighted Sample Distribution (%) | Weighted Sample Distribution (%) | Unweighted Sample Size (n)
Gender
  Male | 42.9 | 48.2 | 429
  Female | 57.1 | 51.8 | 571
Age
  18–29 | 19.8 | 22.8 | 198
  30–39 | 21.3 | 20.1 | 213
  40–49 | 23.5 | 24.2 | 235
  50–59 | 17.6 | 15.2 | 176
  60–69 | 14.0 | 13.9 | 140
  70+ | 3.8 | 3.9 | 38
Education
  Primary | 24.7 | 35.6 | 247
  Secondary | 42.3 | 39.0 | 423
  Tertiary | 33.0 | 25.4 | 330
Type of settlement
  Budapest (capital) | 20.7 | 21.2 | 207
  Towns | 54.3 | 52.1 | 543
  Villages | 25.0 | 26.7 | 250
Region
  Central Hungary | 32.4 | 34.6 | 324
  Northern Hungary | 11.1 | 10.4 | 111
  Northern Great Plain | 14.8 | 13.8 | 148
  Southern Great Plain | 14.0 | 11.7 | 140
  Southern Transdanubia | 8.9 | 8.8 | 89
  Central Transdanubia | 9.7 | 10.7 | 97
  Western Transdanubia | 9.1 | 9.9 | 91
Total | 100% | 100% | 1000
Table A2: Activities for which the respondent uses their smartphone

1. Browsing websites
2. Writing/reading emails
3. Taking photos, videos
4. Viewing content from social networking sites (e.g., texts, images, videos on Facebook, Instagram, Twitter, etc.)
5. Posting content to social media sites (e.g., sharing text, images, videos on Facebook, Instagram, Twitter)
6. Online shopping (e.g., tickets, books, clothes, technical articles)
7. Online banking (e.g., account balance inquiries, transfers)
8. Installing new apps (e.g., via Google Play or the App Store)
9. Using apps that use the device's location (e.g., Google Maps, Foursquare)
10. Connecting devices to your device via Bluetooth (e.g., smart watch, pedometer)
11. Gaming
12. Listening to music, watching videos
13. Recording training data (e.g., while running, number of steps per day, etc.)
14. Reading and editing files related to work and study
15. Voice assistant services (Google Assistant, Apple Siri, Amazon Alexa, etc.)
Table A3: The three items in the questionnaire measuring personal trust

1. Generally speaking, would you say that most people can be trusted, or that you cannot be too careful in dealing with people? Place your opinion on a scale where "0" means you cannot be too careful and "10" means that most people can be trusted.
2. Do you think that most people would try to take advantage of you if they had the opportunity, or would they try to be fair? Place your opinion on a scale where "0" means that most people would try to take advantage of you and "10" means that most people would try to be fair.
3. Do you think people tend to care only about themselves, or are they generally helpful? Place your opinion on a scale where "0" means people care more about themselves and "10" means people tend to be helpful.
Table A4: The list of institutions about which we asked respondents how much they trust them

1. Hungarian Parliament
2. Hungarian legal system
3. Politicians
4. Police
5. Scientists
6. Online stores
7. Large Internet companies (Apple, Google, Facebook, Microsoft, etc.)
8. Online news portals
Table A5: Types of active and passive data collection methods

Active data collection:
1. Answer some questions via text message (SMS).
2. Answer the questions of a questionnaire in a personal video interview using your smartphone. (Questions will be asked by the interviewer.)
3. Fill out an online questionnaire through an app downloaded to your smartphone.
4. Fill out an online questionnaire through your smartphone's web browser.
5. Take photos or scan barcodes with your smartphone camera (e.g., photos of receipts or barcodes of products you purchase).
6. While you are watching a research-related video on your phone, your camera uses software to examine what emotions appear on your face.

Passive data collection:
7. Allowing the built-in function of your smartphone to measure, e.g., how much and at what speed you walk, run or bike.
8. Connecting a device to your smartphone using a Bluetooth connection (for example, to measure your physical activity).
9. Downloading an app that collects information about how you use your smartphone.
10. How long you use your phone in a day (that is, how long your device's display is on).
11. How many times a day you receive and make calls. (Only the number of calls is recorded, no phone numbers!)
12. The number of entries in your phonebook (i.e., how many phone numbers are stored in your device; specific names and phone numbers will not be retrieved from your device!).
13. Sharing your smartphone's geographic coordinates (e.g., how much time you spend in a particular location).
14. The number of applications installed on your phone.
15. The number of male and female names in your phone's contact list. (Important: specific names and phone numbers will not be retrieved from your device!)
16. The proportion of foreign phone numbers in your phonebook. (Important: specific names and phone numbers will not be exported from your device!)
17. The time when you start using your phone in the morning.
18. The time when you last use your phone in the evening.
APPENDIX B. INTERNAL VALIDITY TEST OF VIGNETTE RESPONSES

There is a strong correlation between the responses to the original and the control vignette (r(998) = 0.89, p < 0.001). Overall, 63.8 percent of the vignette responses were the same for the control item and the randomly selected main vignette. Another 9.1 and 8.4 percent of the responses differed by only minus or plus one point, respectively. This means that 81.3 percent of the responses can be considered quasi-identical for the randomly chosen original and the control vignette.
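This consistency check is straightforward to reproduce on the paired responses; a minimal sketch, assuming a data file with hypothetical columns original and control holding the two ratings for each respondent:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical respondent-level data with the two paired vignette ratings.
pairs = pd.read_csv("control_vignette_pairs.csv")  # columns: original, control

r, p = pearsonr(pairs["original"], pairs["control"])
diff = pairs["control"] - pairs["original"]

same = (diff == 0).mean()                # share of identical responses
within_one = (diff.abs() <= 1).mean()    # share differing by at most one point
print(f"r = {r:.2f} (p = {p:.3g}), identical = {same:.1%}, within +/-1 point = {within_one:.1%}")
```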
Figure A1: Comparison of responses for original vignettes vs. control vignette.
AUTHOR CONTRIBUTIONS

Conceptualization and methodology: B.S., J.K. and A.G.; formal analysis: J.K., B.S. and A.G.; writing—original draft preparation, review and editing: A.G., J.K. and B.S. All authors have read and agreed to the published version of the manuscript.
REFERENCES

1.
De Bruijne M., Wijnant A. Comparing Survey Results Obtained via Mobile Devices and Computers: An Experiment with a Mobile Web Survey on a Heterogeneous Group of Mobile Devices Versus a Computer-Assisted Web Survey. Soc. Sci. Comput. Rev. 2013;31:482– 504. doi: 10.1177/0894439313483976. 2. De Bruijne M., Wijnant A. Mobile Response in Web Panels. Soc. Sci. Comput. Rev. 2014;32:728–742. doi: 10.1177/0894439314525918. 3. Couper M.P., Antoun C., Mavletova A. Total Survey Error in Practice. John Wiley & Sons, Ltd.; Hoboken, NJ, USA: 2017. Mobile Web Surveys; pp. 133–154. 4. Couper M.P. New Developments in Survey Data Collection. Annu. Rev. Sociol. 2017;43:121–145. doi: 10.1146/annurev-soc-060116-053613. 5. Brenner P.S., DeLamater J. Lies, Damned Lies, and Survey SelfReports? Identity as a Cause of Measurement Bias. Soc. Psychol. Q. 2016;79:333–354. doi: 10.1177/0190272516628298. 6. Brenner P.S., DeLamater J. Measurement Directiveness as a Cause of Response Bias: Evidence From Two Survey Experiments. Sociol. Methods Res. 2014;45:348–371. doi: 10.1177/0049124114558630. 7. Palczyńska M., Rynko M. ICT Skills Measurement in Social Surveys: Can We Trust Self-Reports? Qual. Quant. 2021;55:917–943. doi: 10.1007/s11135-020-01031-4. 8. Tourangeau R., Rips L.J., Rasinski K. The Psychology of Survey Response. Cambridge University Press; Cambridge, UK: 2000. 9. Link M.W., Murphy J., Schober M.F., Buskirk T.D., Childs J.H., Tesfaye C.L. Mobile Technologies for Conducting, Augmenting and Potentially Replacing Surveys: Report of the AAPOR Task Force on Emerging Technologies in Public Opinion Research. Public Opin. Q. 2014;78:779–787. doi: 10.1093/poq/nfu054. 10. Karsai M., Perra N., Vespignani A. Time Varying Networks and the Weakness of Strong Ties. Sci. Rep. 2015;4:4001. doi: 10.1038/ srep04001. 11. Onnela J.-P., Saramäki J., Hyvönen J., Szabó G., Lazer D., Kaski K., Kertész J., Barabási A.-L. Structure and Tie Strengths in Mobile Communication Networks. Proc. Natl. Acad. Sci. USA. 2007;104:7332– 7336. doi: 10.1073/pnas.0610245104.
12. Palmer J.R.B., Espenshade T.J., Bartumeus F., Chung C.Y., Ozgencil N.E., Li K. New Approaches to Human Mobility: Using Mobile Phones for Demographic Research. Demography. 2013;50:1105–1128. doi: 10.1007/s13524-012-0175-z. 13. Miritello G., Moro E., Lara R., Martínez-López R., Belchamber J., Roberts S.G.B., Dunbar R.I.M. Time as a Limited Resource: Communication Strategy in Mobile Phone Networks. Soc. Netw. 2013;35:89–95. doi: 10.1016/j.socnet.2013.01.003. 14. Kreuter F., Presser S., Tourangeau R. Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity. Public Opin. Q. 2008;72:847–865. doi: 10.1093/poq/nfn063. 15. Mulder J., de Bruijne M. Willingness of Online Respondents to Participate in Alternative Modes of Data Collection. Surv. Pract. 2019;12:8356. doi: 10.29115/SP-2019-0001. 16. Scherpenzeel A. Data Collection in a Probability-Based Internet Panel: How the LISS Panel Was Built and How It Can Be Used. BMS Bull. Sociol. Methodol./Bull. Méthodol. Sociol. 2011;109:56–61. doi: 10.1177/0759106310387713. 17. Kołakowska A., Szwoch W., Szwoch M. A Review of Emotion Recognition Methods Based on Data Acquired via Smartphone Sensors. Sensors. 2020;20:6367. doi: 10.3390/s20216367. 18. Kreuter F., Haas G.-C., Keusch F., Bähr S., Trappmann M. Collecting Survey and Smartphone Sensor Data With an App: Opportunities and Challenges Around Privacy and Informed Consent. Soc. Sci. Comput. Rev. 2020;38:533–549. doi: 10.1177/0894439318816389. 19. Struminskaya B., Lugtig P., Keusch F., Höhne J.K. Augmenting Surveys With Data From Sensors and Apps: Opportunities and Challenges. Soc. Sci. Comput. Rev. 2020:089443932097995. doi: 10.1177/0894439320979951. 20. Younis E.M.G., Kanjo E., Chamberlain A. Designing and Evaluating Mobile Self-Reporting Techniques: Crowdsourcing for Citizen Science. Pers. Ubiquitous Comput. 2019;23:329–338. doi: 10.1007/ s00779-019-01207-2. 21. Keusch F., Struminskaya B., Antoun C., Couper M.P., Kreuter F. Willingness to Participate in Passive Mobile Data Collection. Public Opin. Q. 2019;83:210–235. doi: 10.1093/poq/nfz007.
118
Advanced Techniques for Collecting Statistical Data
22. Revilla M., Toninelli D., Ochoa C., Loewe G. Do Online Access Panels Need to Adapt Surveys for Mobile Devices? Internet Res. 2016;26:1209–1227. doi: 10.1108/IntR-02-2015-0032. 23. Wenz A., Jäckle A., Couper M.P. Willingness to Use Mobile Technologies for Data Collection in a Probability Household Panel. Surv. Res. Methods. 2019;13:1–22. doi: 10.18148/SRM/2019. V1I1.7298. 24. Van Dijck J. Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology. Surveill. Soc. 2014;12:197–208. doi: 10.24908/ss.v12i2.4776. 25. Bricka S., Zmud J., Wolf J., Freedman J. Household Travel Surveys with GPS An Experiment. Transp. Res. Rec. J. Transp. Res. Board. 2009;2105:51–56. doi: 10.3141/2105-07. 26. Biler S., Šenk P., Winklerová L. Willingness of Individuals to Participate in a Travel Behavior Survey Using GPS Devices [Stanislav Biler et al.]; Proceedings of the NTTS 2013; Brussels, Belgium. 5–7 March 2013; pp. 1015–1023. 27. Toepoel V., Lugtig P. What Happens If You Offer a Mobile Option to Your Web Panel? Evidence From a Probability-Based Panel of Internet Users. Soc. Sci. Comput. Rev. 2014;32:544–560. doi: 10.1177/0894439313510482. 28. Pinter R. Willingness of Online Access Panel Members to Participate in Smartphone Application-Based Research. Mob. Res. Methods. 2015:141–156. doi: 10.5334/bar.i. 29. Revilla M., Ochoa C., Loewe G. Using Passive Data from a Meter to Complement Survey Data in Order to Study Online Behavior. Soc. Sci. Comput. Rev. 2017;35:521–536. doi: 10.1177/0894439316638457. 30. Scherpenzeel A. Mixing Online Panel Data Collection with Innovative Methods. In: Eifler S., Faulbaum F., editors. Methodische Probleme von Mixed-Mode-Ansätzen in der Umfrageforschung. Springer Fachmedien; Wiesbaden, Germany: 2017. pp. 27–49. Schriftenreihe der ASI—Arbeitsgemeinschaft Sozialwissenschaftlicher Institute. 31. Cabalquinto E., Hutchins B. “It Should Allow Me to Opt in or Opt out”: Investigating Smartphone Use and the Contending Attitudes of Commuters towards Geolocation Data Collection. Telemat. Inform. 2020;51:101403. doi: 10.1016/j.tele.2020.101403.
Attitudes towards Participation in a Passive Data Collection Experiment
119
32. Struminskaya B., Toepoel V., Lugtig P., Haan M., Luiten A., Schouten B. Understanding Willingness to Share Smartphone-Sensor Data. Public Opin. Q. 2021;84:725–759. doi: 10.1093/poq/nfaa044. 33. Haas G., Kreuter F., Keusch F., Trappmann M., Bähr S. Effects of Incentives in Smartphone Data Collection. In: Hill C.A., Biemer P.P., Buskirk T.D., Japec L., Kirchner A., Kolenikov S., Lyberg L.E., editors. Big Data Meets Survey Science. Wiley; Hoboken, NJ, USA: 2020. pp. 387–414. 34. Hox J.J., Kreft I.G.G., Hermkens P.L.J. The Analysis of Factorial Surveys. Sociol. Methods Res. 1991;19:493–510. doi: 10.1177/0049124191019004003. 35. Jasso G. Factorial Survey Methods for Studying Beliefs and Judgments. Sociol. Methods Res. 2006;34:334–423. doi: 10.1177/0049124105283121. 36. Auspurg K., Hinz T. Multifactorial Experiments in Surveys: Conjoint Analysis, Choice Experiments, and Factorial Surveys. In: Keuschnigg M., Wolbring T., editors. Experimente in den Sozialwissenschaften. Nomos; Baden-Baden, Germany: 2015. pp. 291–315. Soziale Welt Sonderband. 37. Wallander L. 25 Years of Factorial Surveys in Sociology: A Review. Soc. Sci. Res. 2009;38:505–520. doi: 10.1016/j.ssresearch.2009.03.004.
Chapter 7

AN INTEGRATIVE REVIEW ON METHODOLOGICAL CONSIDERATIONS IN MENTAL HEALTH RESEARCH – DESIGN, SAMPLING, DATA COLLECTION PROCEDURE AND QUALITY ASSURANCE

Eric Badu1, Anthony Paul O’Brien2, and Rebecca Mitchell3

1 School of Nursing and Midwifery, The University of Newcastle, Callaghan, Australia

2 Faculty of Health and Medicine, School of Nursing and Midwifery, University of Newcastle, Callaghan, Australia

3 Faculty of Business and Economics, Macquarie University, North Ryde, Australia
Citation (APA): Badu, E., O’Brien, A. P., & Mitchell, R. (2019). An integrative review on methodological considerations in mental health research – design, sampling, data collection procedure and quality assurance. Archives of Public Health, 77(1), 1–15.

Copyright: © This is an open-access article distributed under the terms of a Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/).

ABSTRACT

Background

Several typologies and guidelines are available to address the methodological and practical considerations required in mental health research. However, few studies have actually attempted to systematically identify and synthesise these considerations. This paper provides an integrative review that identifies and synthesises the available research evidence on methodological considerations in mental health research.
Methods

A search of the published literature was conducted using EMBASE, Medline, PsycINFO, CINAHL, Web of Science, and Scopus. The search was limited to papers published in English for the timeframe 2000–2018. Using predefined inclusion and exclusion criteria, three reviewers independently screened the retrieved papers. A data extraction form was used to extract data from the included papers.
Results

Of the 27 papers meeting the inclusion criteria, 13 focused on qualitative research, 8 on mixed methods and 6 on quantitative methodology. A total of 14 papers targeted global mental health research, with 2 papers each describing studies in Germany, Sweden and China. The review identified several methodological considerations relating to study design, methods, data collection, and quality assurance. Methodological issues regarding the study design included assembling team members, familiarisation and sharing information on the topic, and seeking the contribution of team members. Methodological considerations to facilitate data collection involved adequate preparation prior to fieldwork; appropriateness and adequacy of the sampling and data collection approach; selection of consumers; the social or cultural context; practical and organisational skills; and ethical and sensitivity issues.
Conclusion

The evidence confirms that studies on methodological considerations in conducting mental health research largely focus on qualitative studies in a transcultural setting, as well as recommendations derived from multisite surveys. Mental health research should adequately consider the methodological issues around study design, sampling, data collection procedures and quality assurance in order to maintain the quality of data collection.

Keywords: Mental health, Methodological approach, Mixed methods, Sampling, Data collection
BACKGROUND

In the past decades there has been considerable attention on research methods to facilitate studies in various academic fields, such as public health, education, the humanities, and the behavioural and social sciences [1–4]. These research methodologies have generally focused on the two major research pillars, quantitative and qualitative research. In recent years, researchers conducting mental health research appear to be either employing qualitative and quantitative research methods separately, or using mixed methods approaches to triangulate and validate findings [5, 6]. A combination of study designs has been utilised to answer research questions associated with mental health services and consumer outcomes [7, 8].

Study designs in the public health and clinical domains, for example, have largely focused on observational studies (non-interventional) and experimental research (interventional) [1, 3, 9]. Observational design in non-interventional research requires the investigator to simply observe, record, classify, count and analyse the data [1, 2, 10]. This design is different from the observational approaches used in social science research, which may involve observing (participant and non-participant) phenomena in the fieldwork [1]. Furthermore, observational studies have been categorized into five types, namely cross-sectional designs, case-control studies, cohort studies, case reports and case series studies [1–3, 9–11]. The cross-sectional design is used to measure the occurrence of a condition at a single point in time and is sometimes referred to as a prevalence study. This approach is relatively quick and easy but does not permit a distinction between cause and effect [1]. Conversely, the case-control design examines the relationship between an attribute and a disease by comparing those with and without the disease [1, 2, 12]. The case-control design is usually retrospective and aims to identify predictors of a particular outcome. This type of design is relevant when investigating rare or chronic diseases which may result from long-term exposure to particular risk factors [10]. Cohort studies measure the relationship between exposure to a factor and the probability of the occurrence of a disease [1, 10]. In a case series design, medical records are reviewed for exposure to determinants of disease and outcomes. More importantly, case series and case reports are often used as preliminary research to provide information on key clinical issues [12].

The interventional study design describes a research approach that applies clinical care to evaluate treatment effects on outcomes [13]. Several previous studies have explained the various forms of experimental study design used in public health and clinical research [14, 15]. In particular,
experimental studies have been categorized into randomized controlled trials (RCTs), non-randomized controlled trials, and quasi-experimental designs [14]. The randomized controlled trial is a comparative study in which participants are randomly assigned to one of two groups, comparing a group receiving the treatment with a control group receiving treatment as usual or a placebo. Herein, exposure to the intervention is determined by random allocation [16, 17].

Recently, research methodologists have given considerable attention to the development of methodologies for conducting research in vulnerable populations. Vulnerable population research, such as research with mental health consumers, often involves considering the challenges associated with sampling (selecting marginalized participants), collecting and analysing data, and research engagement. Consequently, several empirical studies have been undertaken to document the methodological issues and challenges in research involving marginalized populations. In particular, these studies largely address the typologies and practical guidelines for conducting empirical studies in mental health. Despite the increasing evidence, however, only a few studies have attempted to systematically identify and synthesise the methodological considerations in conducting mental health research from the perspective of consumers.

A preliminary search using Medline, Web of Science, Google Scholar, Scopus and EMBASE identified only two reviews of mental health research. One focused on the various types of mixed methods used in mental health research [18], whilst the other focused on the role of qualitative studies in mental health research involving mixed methods [19]. Even though these two studies attempted to systematically review mixed methods mental health research, this integrative review is unique in that it collectively synthesises the design, data collection, sampling, and quality assurance issues together, which has not been previously attempted.

This paper provides an integrative review addressing the available evidence on methodological considerations in mental health research. The paper also synthesises evidence on the methods, study designs, data collection procedures, analyses and quality assurance measures. Identifying and synthesising evidence on the conduct of mental health research is relevant to clinicians and academic researchers, as the evidence provides a guide to the methodological issues involved when conducting research in the mental health domain. Additionally, the synthesis
can inform clinicians and academia about the gaps in the literature related to methodological considerations.
METHODS

Methodology

An integrative review was conducted to synthesise the available evidence on methodological considerations in mental health research. To guide the review, the World Health Organization (WHO) definition of mental health has been utilised. The WHO defines mental health as: “a state of well-being, in which the individual realises his or her own potentials, ability to cope with the normal stresses of life, functionality and work productivity, as well as the ability to contribute effectively in community life” [20].

The integrative review enabled the simultaneous inclusion of diverse methodologies (i.e., experimental and non-experimental research) and varied perspectives to fully understand a phenomenon of concern [21, 22]. The review also uses diverse data sources to develop a holistic understanding of methodological considerations in mental health research. The methodology employed involves five stages: 1) problem identification (ensuring that the research question and purpose are clearly defined); 2) literature search (incorporating a comprehensive search strategy); 3) data evaluation; 4) data analysis (data reduction, display, comparison and conclusions); and 5) presentation (synthesising findings in a model or theory and describing the implications for practice, policy and further research) [21].
Inclusion Criteria

The integrative review focused on methodological issues in mental health research. This included core areas such as study design and methods, particularly qualitative, quantitative or both. The review targeted papers that addressed study design, sampling, data collection procedures, quality assurance and the data analysis process. More specifically, the included papers addressed methodological issues in empirical studies in mental health research. The methodological issues in this context are not limited to a particular mental illness. Studies that met the inclusion criteria were peer-reviewed articles published in the English language, from January 2000 to July 2018.
Exclusion Criteria

Articles were excluded if they focused purely on general health services or the clinical effectiveness of a particular intervention with no connection to mental health research. Articles were also excluded when they addressed non-methodological issues. Other general exclusion criteria were book chapters, conference abstracts, papers presenting opinion, editorials, commentaries and clinical case reviews.
Search Strategy and Selection Procedure

The search of published articles was conducted in six electronic databases, namely EMBASE, CINAHL (EBSCO), Web of Science, Scopus, PsycINFO and Medline. We developed a search strategy based on the guidelines recommended by the Joanna Briggs Institute (JBI) [23]. Specifically, a three-step search strategy was utilised (see Table 1). An initial limited search was conducted in Medline and EMBASE (see Table 1). We analysed the text words contained in the titles and abstracts and the index terms from the initial search results [23]. A second search using all identified keywords and index terms was then repeated across the remaining databases (see Table 1). Finally, the reference lists of all eligible studies were hand searched [23].

Table 1: Search strategy and selection procedure

Stages | Search terms and keywords
Stage 1 (initial search in MEDLINE and EMBASE) | (“mental health” OR “mental health service” OR “psychiatric services” OR “mental disorders” OR “mental illness”) AND (“methods” OR “research designs” OR “data collection” OR “data analysis” OR “sampling” OR “sample size” OR “mixed methods”) AND (“quality assurance” OR “reliability” OR “validity” OR “techniques” OR “strategies” OR “research design” OR “informed consent”)
Stage 2 (search across CINAHL, Web of Science, Scopus, and PsycINFO) | (“psychiatry” OR “mental health” OR “mental disorders” OR “mental patient” OR “mental illness” OR “mental treatment” OR “consumer”) AND (“research methods” OR “methodology” OR “research designs” OR “qualitative research” OR “quantitative research” OR “mixed methods” OR “biomedical research” OR “health service research” OR “epidemiologic methods” OR “behavioural research” OR “process design”) AND (“sampling” OR “sample size” OR “patient selection” OR “surveys” OR “questionnaires” OR “interviews” OR “data analysis” OR “content analysis” OR “thematic analysis” OR “reporting”) AND (“informed consent” OR “reliability” OR “quality assurance” OR “validity” OR “techniques” OR “strategies” OR “process”)
Stage 3 | Hand searching of the reference lists of eligible studies
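The staged Boolean strings in Table 1 are tedious to maintain by hand across six databases. A minimal sketch of how such a query block could be assembled programmatically is given below; the term lists mirror Stage 1 of Table 1, but the helper function and any database-specific syntax are illustrative assumptions, not part of the original search protocol.

```python
# Illustrative sketch: assembling a Stage 1 style Boolean query from term lists.
# The grouping mirrors Table 1; the function name and quoting rules are assumptions.

def boolean_block(terms):
    """Join a list of search terms with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

population = ["mental health", "mental health service", "psychiatric services",
              "mental disorders", "mental illness"]
methodology = ["methods", "research designs", "data collection", "data analysis",
               "sampling", "sample size", "mixed methods"]
quality = ["quality assurance", "reliability", "validity", "techniques",
           "strategies", "research design", "informed consent"]

query = " AND ".join(boolean_block(block) for block in (population, methodology, quality))
print(query)
```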
The selection of eligible articles adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [24] (see Fig. 1). Firstly, three authors independently screened the titles of articles that were retrieved and then approved those meeting the selection criteria. The authors reviewed all the titles and abstracts and agreed on those needing full-text screening. E.B (Eric Badu) conducted the initial screening of titles and abstracts. A.P.O’B (Anthony Paul O’Brien) and R.M (Rebecca Mitchell) conducted the second screening of titles and abstracts of all the identified papers. The authors (E.B, A.P.O’B and R.M) conducted full-text screening according to the inclusion and exclusion criteria.
Figure 1: Flow Chart of studies included in the review.
Data Management and Extraction

The integrative review used EndNote X8 to screen and handle duplicate references. A predefined data extraction form was developed to extract data from all included articles (see Additional file 1). The data extraction form was developed according to the Joanna Briggs Institute (JBI) [23] and Cochrane [24] manuals, as well as the literature associated with concepts and methods in mental health research. The data extraction form was
categorised into sub-sections, such as study details (citation, year of publication, author, contact details of the lead author, funder/sponsoring organisation, and publication type), the objective of the paper, and the primary subject area of the paper (study design, methods, sampling, data collection, data analysis, quality assurance). The data extraction form also had a section on additional information on methodological considerations, recommendations and other potential references. The authors extracted results of the included papers in numerical and textual format [23]. E.B (Eric Badu) conducted the data extraction; A.P.O’B (Anthony Paul O’Brien) and R.M (Rebecca Mitchell) conducted the second review of the extracted data.
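As a sketch of how the sub-sections described above could be held in a structured, machine-readable form, the record below mirrors the extraction fields named in the text; the class name, field names and example values are hypothetical and only illustrate one possible layout of the form.

```python
# Illustrative sketch of a structured data-extraction record.
# Field names follow the sub-sections described in the text; names and values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    citation: str                      # study details
    year: int
    authors: List[str]
    funder: str
    publication_type: str
    objective: str                     # objective of the paper
    subject_areas: List[str]           # e.g. study design, sampling, data collection
    recommendations: str = ""          # additional methodological notes
    extra_references: List[str] = field(default_factory=list)

record = ExtractionRecord(
    citation="Example Author et al. (2010)",
    year=2010,
    authors=["Example Author"],
    funder="not reported",
    publication_type="journal article",
    objective="Methodological guidance for qualitative mental health research",
    subject_areas=["methods", "sampling", "data collection", "quality assurance"],
)
print(record.subject_areas)
```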
Data Synthesis

Content analysis was used to synthesise the extracted data. The content analysis process involved several stages: noting patterns and themes, seeing plausibility, clustering, counting, making contrasts and comparisons, discerning common and unusual patterns, subsuming particulars into the general, noting relations between variables, finding intervening factors and building a logical chain of evidence [21] (see Table 2).

Table 2: The key emerging themes

Theme | Sub-theme | Na | Papers
Mixed methods design in mental health research | Categorizing mixed methods | 4 | (19) (18) (43) (48)
Mixed methods design in mental health research | Function of mixed methods | 6 | (45) (42) (48) (19) (18) (43)
Mixed methods design in mental health research | Structure of mixed methods | 5 | (43) (19) (18) (42) (48)
Mixed methods design in mental health research | Process of mixed methods | 5 | (48) (43) (42) (19) (18)
Mixed methods design in mental health research | Consideration for using mixed methods | 3 | (19) (18) (45)
Qualitative study in mental health research | Considering qualitative methods | 6 | (32) (36) (19) (26) (28) (44)
Sampling in mental health research | Sampling approaches (quantitative) | 3 | (35) (34) (25)
Sampling in mental health research | Sampling approaches (qualitative) | 7 | (28) (32) (46) (19) (42) (30) (31)
Sampling in mental health research | Sampling consideration | 4 | (30) (31) (32) (46)
Data collection in mental health research | Approaches for collecting qualitative data | 9 | (28) (41) (30) (31) (44) (47) (19) (40) (34)
Data collection in mental health research | Consideration for data collection | 6 | (32) (37) (31) (41) (49) (47)
Data collection in mental health research | Preparing for data collection | 8 | (25) (33) (34) (35) (39) (41) (49) (30)
Quality assurance procedures | Seeking informed consent | 7 | (25) (26) (33) (35) (37) (39) (47)
Quality assurance procedures | Procedure for ensuring quality control (quantitative) | 5 | (49) (25) (39) (33) (38)
Quality assurance procedures | Procedure for ensuring quality control (qualitative) | 4 | (32) (37) (46) (19)

Na: number of papers
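The counting step of the content analysis (the "Na" column in Table 2) amounts to tallying how many of the included papers contribute to each sub-theme. A small sketch of that tally is shown below; the mapping is abbreviated to two sub-themes from Table 2 and the variable names are illustrative only.

```python
# Illustrative tally of papers per sub-theme (the "Na" column of Table 2).
# Only two sub-themes from the table are reproduced; names are illustrative.
subtheme_papers = {
    "Categorizing mixed methods": [19, 18, 43, 48],
    "Sampling approaches (quantitative)": [35, 34, 25],
}

na = {subtheme: len(papers) for subtheme, papers in subtheme_papers.items()}
print(na)  # {'Categorizing mixed methods': 4, 'Sampling approaches (quantitative)': 3}
```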
RESULTS

Study Characteristics

The integrative review identified a total of 491 records from all databases, after which 19 duplicates were removed. The remaining 472 titles and abstracts were assessed for eligibility, and 439 articles were excluded. Articles not meeting the inclusion criteria were excluded; specifically, papers that did not address methodological issues, as well as papers addressing methodological considerations in other disciplines. A total of 33 full-text articles were assessed; 9 articles were further excluded, whilst an additional 3 articles were identified from reference lists. Overall, 27 articles were included in the final synthesis (see Fig. 1). Of the included papers, 12 contained qualitative research, 9 were mixed methods (both qualitative and quantitative) and 6 focused on quantitative data. A total of 14 papers targeted global mental health research, with 2 papers each describing studies in Germany, Sweden and China. The papers addressed different methodological issues, such as study design, methods, data collection, and analysis as well as quality assurance (see Table 3).
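The screening counts reported above can be checked arithmetically against the PRISMA flow in Fig. 1. The short sketch below reproduces that arithmetic; the variable names are illustrative, and the figures are taken directly from the paragraph above.

```python
# Arithmetic check of the screening flow reported in the text (cf. Fig. 1).
identified = 491
duplicates = 19
screened = identified - duplicates                       # 472 titles/abstracts screened
excluded_on_screening = 439
full_text_assessed = screened - excluded_on_screening    # 33 full-text articles
excluded_full_text = 9
added_from_reference_lists = 3
included = full_text_assessed - excluded_full_text + added_from_reference_lists

assert screened == 472 and full_text_assessed == 33 and included == 27
print(f"{included} articles included in the final synthesis")
```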
Table 3: Study characteristics

Author | Setting | Methodological issues addressed | Type of method
Alonso, Angermeyer [25] | Belgium, France, Germany, Italy, the Netherlands and Spain | Sampling, data collection and quality assurance | Quantitative
Baarnhielm and Ekblad [26] | Sweden | Quality assurance (ethical issues) | Qualitative
Braun and Clarke [27] | Global | Data analysis | Qualitative
Brown and Lloyd [28] | Global | Methods, sampling, data collection and analysis | Qualitative
Davidsen [29] | Global | Data analysis | Qualitative
de Jong and Van Ommeren [30] | Global | Sampling and data collection | Mixed methods
Ekblad and Baarnhielm [31] | Sweden | Data collection | Qualitative
Fossey, Harvey [32] | Global | Methods, sampling, data collection, data analysis and quality assurance | Qualitative
Jacobi, Wittchen [33] | Germany | Data collection, analysis and quality assurance | Quantitative
Koch, Vogel [34] | Germany | Sampling, data collection and quality assurance | Mixed methods
Korver, Quee [35] | Netherlands | Sampling and quality assurance | Quantitative
Larkin, Watts [36] | Global | Study design | Qualitative
Latvala, Vuokila-Oikkonen [37] | Finland | Data collection and quality assurance | Qualitative
Leese, White [38] | Europe | Quality assurance | Quantitative
Liu, Huang [39] | China | Data analysis and quality assurance | Quantitative
Montgomery and Bailey [40] | Canada | Data collection and analysis | Qualitative
Owen [41] | UK | Data collection | Qualitative
Palinkas [19] | Global | Study design, methods, sampling, data collection, analysis and quality assurance | Mixed methods
Palinkas, Horwitz [18] | Global | Study design | Mixed methods
Palinkas, Horwitz [42] | Global | Sampling | Mixed methods
Palinkas, Aarons [43] | Global | Study design | Mixed methods
Razafsha, Behforuzi [44] | Global | Methods and data collection | Mixed methods
Robins, Ware [45] | Global | Study design | Mixed methods
Robinson [46] | Global | Sampling and quality assurance | Qualitative
Schilder, Tomov [47] | Bulgaria | Data collection | Qualitative
Schoonenboom and Johnson [48] | Global | Study design | Mixed methods
Yin, Phillips [49] | China | Data collection | Quantitative
Mixed Methods Design in Mental Health Research

Mixed methods research is defined as a research process in which elements of qualitative and quantitative research are combined in the design and data collection, and used for triangulation and validation [48]. The integrative review identified sub-themes that describe mixed methods design in the context of mental health research: the categorisation of mixed methods, their function, structure and process, and further methodological considerations for mixed methods design. These sub-themes are explained as follows:
Categorizing Mixed Methods in Mental Health Research

Four studies highlighted the categories of mixed methods design applicable to mental health research [18, 19, 43, 48]. Although the studies differ in how they categorise mixed methods designs, three distinct categories predominantly cut across all of them: function, structure and process. Some studies further categorised mixed methods design to include rationale, objectives, or purpose. For instance, Schoonenboom and Johnson [48] categorised mixed methods design into primary and secondary dimensions.
The Function of Mixed Methods in Mental Health Research

Six studies explain the function of mixed methods design in mental health research. Two studies specifically noted that mixed methods can provide a more robust understanding of services by expanding and strengthening the conclusions from the study [42, 45]. More importantly, the combined use of qualitative and quantitative methods can provide innovative solutions to important and complex
problems, especially by addressing diversity and divergence [48].

The review identified five underlying functions of a mixed methods design in mental health research: achieving convergence, complementarity, expansion, development and sampling [18, 19, 43]. The use of mixed methods to achieve convergence employs both qualitative and quantitative data to answer the same question, either through triangulation (to confirm the conclusions from each of the methods) or transformation (using qualitative techniques to transform quantitative data). Similarly, complementarity in mixed methods integrates both qualitative and quantitative methods to answer questions for the purpose of evaluation or elaboration [18, 19, 43]. Two papers recommend that qualitative methods are used to provide depth of understanding, whilst quantitative methods provide breadth of understanding [18, 43]. In mental health research, qualitative data are often used to examine treatment processes, whilst quantitative methods are used to examine treatment outcomes against quality-of-care key performance targets. Additionally, three papers indicated that expansion as a function of mixed methods uses one type of method to answer questions raised by the other type of method [18, 19, 43]; for instance, qualitative data are used to explain findings from a quantitative analysis. Some studies also highlight development as a function of mixed methods, which uses one method to answer research questions and then uses the findings to inform another method addressing different research questions. A qualitative method, for example, is used to identify the content of items to be used in a quantitative study; this approach uses qualitative methods to create a conceptual framework for generating hypotheses to be tested with a quantitative method [18, 19, 43]. Three papers suggested that using mixed methods for the purpose of sampling utilises one method (e.g., quantitative) to identify a sample of participants for research using the other method (e.g., qualitative) [18, 19, 43]; for instance, quantitative data are sequentially utilised to identify potential participants for a qualitative study, and vice versa.
Structure of Mixed Methods in Mental Health Research

Five studies categorised the structure of mixed methods in mental health research into two broader concepts: simultaneous (concurrent) and sequential (see Table 3). In both categories, one method is regarded as primary and the other as secondary, although equal weight can be given to both methods [18, 19, 42, 43, 48]. Two studies suggested
that the sequential design is a process where the data collection and analysis of one component (e.g., quantitative) takes place after the data collection and analysis of the other component (e.g., qualitative). Herein, the data collection and analysis of one component (e.g., qualitative) may depend on the outcomes of the other component (e.g., quantitative) [43, 48]. An earlier review suggested that the majority of contemporary studies in mental health research use a sequential design, with qualitative methods more often preceding quantitative methods [18]. Alternatively, the concurrent design collects and analyses data from both components (e.g., quantitative and qualitative) simultaneously and independently. Palinkas, Horwitz [42] recommend that one component is used as secondary to the other component, or that both components are assigned equal priority. Such a mixed methods approach aims to combine the depth of understanding afforded by qualitative methods with the breadth of understanding offered by quantitative data, to elaborate on the findings of one component or to seek convergence through triangulation of the results. Schoonenboom and Johnson [48] recommended the use of capital letters for one component and lower-case letters for the other component in the same design, to indicate that one component is primary and the other is secondary or supplemental.
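The capital/lower-case labelling described above is often written as short design strings (a convention in which an arrow denotes a sequential design and a plus sign a concurrent one; the arrow and plus symbols are not stated in the source and are an assumption here). A minimal sketch of composing such labels follows; the function and parameter names are illustrative.

```python
# Illustrative sketch of composing mixed methods design labels.
# Capital letters mark the primary component, lower case the supplemental one;
# "->" (sequential) and "+" (concurrent) follow a common convention and are an assumption.

def design_label(primary: str, supplemental: str, sequential: bool) -> str:
    """Return a design string such as 'QUAL -> quan' or 'QUAN + qual'."""
    connector = " -> " if sequential else " + "
    return primary.upper() + connector + supplemental.lower()

print(design_label("qual", "quan", sequential=True))    # QUAL -> quan
print(design_label("quan", "qual", sequential=False))   # QUAN + qual
```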
Process of Mixed Methods in Mental Health Research

Five papers highlighted the process of using mixed methods in mental health research [18, 19, 42, 43, 48]. The papers suggested three distinct processes or strategies for combining qualitative and quantitative data: merging or converging the two datasets; connecting the two datasets by having one build upon the other; and embedding one dataset within the other [19, 43]. The process of connecting occurs when the analysis of one dataset leads to the need for the other dataset, for instance, when quantitative results lead to the subsequent collection and analysis of qualitative data [18, 43]. A previous study suggested that most studies in mental health sought to connect the datasets. Similarly, the process of merging brings the two sets of data together during the interpretation, or transforms one type of data into the other type by combining the data into new variables [18]. The process of embedding uses one dataset to provide a supportive role to the other dataset [43].
Consideration for Using Mixed Methods in Mental Health Research

Three studies highlighted several factors that need to be considered when conducting a mixed methods design in mental health research [18, 19, 45]. These factors include developing familiarity with the topic under investigation based on experience, willingness to share information on the topic [19], establishing early collaboration, willingness to negotiate emerging problems, seeking the contribution of team members, and soliciting third-party assistance to resolve any emerging problems [45]. Additionally, Palinkas, Horwitz [18] noted that mixed methods in the context of mental health research are mostly applied in studies that assess the need for services, examine existing services, develop new or adapt existing services, evaluate services in randomised controlled trials, and examine service implementation.
Qualitative Study in Mental Health Research

This theme describes the various qualitative methods used in mental health research. The theme also addresses methodological considerations for using qualitative methods in mental health research. The key emerging issues are discussed below.
Considering Qualitative Components in Conducting Mental Health Research

Six studies recommended the use of qualitative methods in mental health research [19, 26, 28, 32, 36, 44]. Two qualitative research paradigms were identified: the interpretive and the critical approach [32]. Interpretive methodologies predominantly explore the meaning of human experiences and actions, whilst the critical approach emphasises the social and historical origins and contexts of meaning [32]. Two studies suggested that the interpretive qualitative methods used in mental health research are ethnography, phenomenology and narrative approaches [32, 36]. The ethnographic approach describes the everyday meaning of phenomena within a societal and cultural context, for instance, the way a phenomenon or experience is contrasted within a community, or by collective members over time [32]. Alternatively, the phenomenological approach explores the claims and concerns of a subject through the speculative development of an interpretative account within their cultural and physical environments, focusing on the lived experience [32, 36].
Moreover, the critical qualitative approaches used in mental health research are predominantly emancipatory (for instance, socio-political traditions) and participatory action-based research. The emancipatory traditions recognise that knowledge is acquired through critical discourse and debate rather than discovered by objective inquiry [32]. Alternatively, the participatory action-based approach uses critical perspectives to engage key stakeholders as participants in the design and conduct of the research [32].

Some studies highlighted several reasons why qualitative methods are relevant to mental health research. In particular, qualitative methods are significant as they emphasise naturalistic inquiry and have a discovery-oriented approach [19, 26]. Two studies suggested that qualitative methods are often relevant in the initial stages of research to understand specific issues such as the behaviour or symptoms of consumers of mental health services [19]. Specifically, Palinkas [19] suggests that qualitative methods help to obtain initial pilot data, and are useful when there is too little previous research or no theory, such as in exploratory studies or previously under-researched phenomena. Three studies stressed that qualitative methods can help to better understand socially sensitive issues, such as exploring solutions to overcome challenges in mental health clinical policies [19, 28, 44]. Consequently, Razafsha, Behforuzi [44] recommended that the natural, holistic view of qualitative methods can help in understanding the recovery-oriented policy of mental health, rather than simply the treatment of symptoms. Similarly, the subjective experiences of consumers captured using qualitative approaches have been found useful to inform clinical policy development [28].
Sampling in Mental Health Research

This theme explains the sampling approaches used in mental health research. The section also describes the methodological considerations when sampling participants for mental health research. The emerging sub-themes are explained in the following sections.
Sampling Approaches (Quantitative)

Some of the studies reviewed highlighted the sampling approaches previously used in mental health research [25, 34, 35]. Generally, quantitative studies tend to use probability sampling approaches, whilst qualitative
studies use non-probability techniques. Quantitative mental health studies conducted at the community and population level employ multi-stage sampling techniques, usually involving systematic, stratified and random sampling [25, 34]. Similarly, quantitative studies that recruit consumers in hospital settings employ consecutive sampling [35]. Two of the studies reviewed highlighted that the identification of consumers of mental health services for research is usually conducted by service providers. For instance, Korver, Quee [35] used a consecutive sampling approach, identifying consumers through clinicians working in regional psychosis departments or academic centres.
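As a concrete illustration of the multi-stage logic described above, the sketch below draws a stratified first stage and a systematic second stage from a hypothetical sampling frame; the frame, strata and sample sizes are invented for the example and do not come from the reviewed studies.

```python
# Illustrative sketch of a two-stage design: stratification by region,
# then systematic sampling within each region. All data are hypothetical.
import random

random.seed(42)

# Hypothetical frame: region -> list of person identifiers
frame = {region: [f"{region}-{i}" for i in range(1, 201)] for region in ("North", "South", "East")}

def systematic_sample(units, n):
    """Take every k-th unit after a random start, where k = len(units) // n."""
    k = max(len(units) // n, 1)
    start = random.randrange(k)
    return units[start::k][:n]

# Stage 1: stratify by region (all strata kept here); Stage 2: systematic sample of 20 per region
sample = {region: systematic_sample(units, 20) for region, units in frame.items()}
print({region: len(people) for region, people in sample.items()})
```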
Sampling Approaches (Qualitative)

Seven studies suggested that the sampling procedures widely used in mental health research involving qualitative methods are non-probability techniques, which include purposive [19, 28, 32, 42, 46], snowball [30, 32, 46] and theoretical sampling [31, 32]. Purposive sampling identifies participants who possess characteristics relevant to answering a research question [28]. Purposive sampling can be used in a single case study or across multiple cases. The purposive sampling used in mental health research is usually extreme or deviant case sampling, criterion sampling, or maximum variation sampling [19]. Furthermore, it is advised that purposive sampling in a multistage study should begin with the broader picture to achieve variation or dispersion, before moving to a more focused view that considers similarity or central tendencies [42]. Two studies added that theoretical sampling involves sampling participants, situations and processes based on concepts on theoretical grounds and then using the findings to build theory, such as in a Grounded Theory study [31, 32]. Some studies highlighted that snowball sampling is another strategy widely used in mental health research [30, 32, 46]. This is because people with mental illness are often marginalised in research and are practically hard to reach using conventional sampling [30, 32]. Snowball sampling involves asking marginalised participants to recommend individuals who might have direct knowledge relevant to the study [30, 32, 46]. Although this approach is useful, some studies caution that the possibility of generalising from the sample is limited because of the likelihood of selection bias [30].
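The recruitment-by-referral logic of snowball sampling can be illustrated with a short simulation; the referral network, seed participants and wave limit below are entirely hypothetical and are included only to show how the chain grows from the initial seeds.

```python
# Illustrative sketch of snowball sampling over a hypothetical referral network.
# Seeds nominate peers, who nominate further peers, wave by wave.
referrals = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": ["F"],
    "F": [],
}

def snowball(seeds, waves):
    recruited = list(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        new = []
        for person in frontier:
            for nominee in referrals.get(person, []):
                if nominee not in recruited and nominee not in new:
                    new.append(nominee)
        recruited.extend(new)
        frontier = new
    return recruited

print(snowball(["A"], waves=2))  # ['A', 'B', 'C', 'D', 'E']
```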
Sampling Consideration

Four studies highlighted some of the sampling considerations in mental health research [30–32, 46]. Generally, mental health research should consider the appropriateness and adequacy of the sampling approach, taking into account attributes such as shared social or cultural experiences or shared concerns related to the study [32], the diversity and variety of participants [31], practical and organisational skills, as well as ethical and sensitivity issues [46]. Robinson [46] further suggested that sampling can be homogeneous or heterogeneous depending on the research questions of the study. Achieving homogeneity in sampling can employ a variety of parameters, including demographic, geographical, physical, psychological, or life-history homogeneity [46]. Additionally, applying homogeneity in sampling can be influenced by theoretical and practical factors. Alternatively, some samples are intentionally selected on the basis of heterogeneity [46].
Data Collection in Mental Health Research

This theme highlights the data collection methods used in mental health research. The theme is explained according to three sub-themes: approaches for collecting qualitative data, methodological considerations for data collection, and preparations for data collection. The sub-themes are as follows:
Approaches for Collecting Qualitative Data

The studies reviewed recommended the approaches that are widely applied in collecting data in mental health research. The most widely used qualitative data collection approaches are focus group discussions (FGDs) [19, 28, 30, 31, 41, 44, 47], extended in-depth interviews [19, 30, 34], participant and non-participant observation [19], Delphi data collection, quasi-statistical techniques [19] and field notes [31, 40]. Seven studies suggest that FGDs are a widely used data collection approach [19, 28, 30, 31, 41, 44, 47] because they are valuable in gathering information on consumers’ perspectives of services, especially regarding satisfaction, unmet/met service needs and the perceived impact of services [47]. Similarly, Ekblad and Baarnhielm [31] recommended this approach as relevant for improving clinical understanding of the thoughts, emotions, meanings and attitudes towards mental health services. Such data collection approaches are particularly relevant to consumers of mental health services, due to their low self-confidence and self-esteem
[41]. The approach can help researchers to understand the specific terms, vocabulary, opinions and attitudes of consumers of mental health services, as well as their reasoning about personal distress and healing [31]. Similarly, the reliance on verbal rather than written communication helps to promote the participation of people with serious and enduring mental health problems [31, 41]. Although FGDs have several important benefits, there are some limitations that need critical consideration. Ekblad and Baarnhielm [31], for example, suggest that marginalised participants may not always feel free to talk about private issues regarding their condition at the group level, mostly due to perceived stigma and concerns about group confidentiality.

Some of the studies reviewed recommended that capturing comprehensive information and analysing group interactions in mental health research requires the use of field notes as a supplementary data source to help validate the FGDs [31, 40, 41]. The use of field notes in addition to FGDs provides greater detail in the accounts of consumers’ subjective experiences. Furthermore, Montgomery and Bailey [40] suggest that field notes require observational sensitivity and should have specific content, such as descriptive and interpretive data.

Three studies suggested that in-depth interviews are used to collect data from consumers of mental health services [19, 30, 34]. This approach is particularly important for exploring behaviour, subjective experiences and psychological processes, as well as opinions and perceptions of mental health services. de Jong and Van Ommeren [30] recommend that in-depth interviews help to collect data on culturally marked disorders, their personal and interpersonal significance, patient and family explanatory models, individual and family coping styles, symptom symbols and protective mediators. Palinkas [19] also highlights that the structured narrative form of extended interviewing is the type of in-depth interview used in mental health research. This approach provides participants with the opportunity to describe the experience of living with an illness and seeking services that assist them.
Consideration for Data Collection

Six studies recommended considerations required in the data collection process [31, 32, 37, 41, 47, 49]. Some studies highlighted that consumers of mental health services might refuse to participate in research due to several factors [37], such as the severity of their illness, stigma and discrimination [41]. It is recommended that such issues be addressed by building
confidence and trust between the researcher and consumers [31, 37]. This is a significant prerequisite, as it can sensitise participants to, and normalise, the research process and aims prior to discussing their personal mental health issues. Similarly, some studies added that the researcher can gain the confidence of the service providers who manage consumers of mental health services [41, 47], seek ethical approval from the relevant committee(s) [41, 47], meet and greet the consumers of mental health services before data collection, and arrange a mutually acceptable venue for the groups, possibly supplying transport [41]. Two studies further suggested that the cultural and social differences of participants need consideration [26, 31], as these factors could influence the perception and interpretation of ethical issues in the research situation. Additionally, two studies recommended the use of standardised assessment instruments for mental health research involving quantitative data collection [33, 49]. A recent survey suggested that one measure to standardise the data collection approach is to convert self-completion instruments into interviewer-completed instruments [49]: the interviewer reads the items of the instrument to respondents and records their responses. The study further suggested the need to collect demographic and behavioural information about the participant(s).
Preparing for Data Collection

Eight studies highlighted the procedures involved in preparing for data collection in mental health research [25, 30, 33–35, 39, 41, 49]. These studies suggest that the preparation process involves organising meetings of researchers, colleagues and representatives of the research population. The meeting of researchers generally involves training the interviewers on the overall design, objectives and research questions associated with the study. de Jong and Van Ommeren [30] recommended that preparation for the use of quantitative instruments encompasses translating and adapting them with the aim of achieving content, semantic, concept, criterion and technical equivalence.
Quality Assurance Procedures in Mental Health Research

This section describes the quality assurance procedures used in mental health research. Quality assurance is explained according to three sub-themes: 1) seeking informed consent, 2) the procedure for ensuring quality control in a quantitative study, and 3) the procedure for ensuring quality control in a qualitative study. The sub-themes are explained in the following content.
Seeking Informed Consent

The papers analysed for the integrative review suggested that the rights of participants to safeguard their integrity must always be respected, and so each potential subject must be adequately informed of the aims, methods, anticipated benefits and potential hazards of the study, and of any potential discomforts (see Table 3). Seven studies highlight that potential participants of mental health research must give informed consent to the study prior to data collection [25, 26, 33, 35, 37, 39, 47]. The consent process helps to assure participants of anonymity and confidentiality and further explains the research procedure to them. Baarnhielm and Ekblad [26] argue that the research should be guided by four basic moral values of medical ethics: autonomy, non-maleficence, beneficence, and justice. In particular, potential consumers of mental health services who have severe conditions and are unable to consent for themselves are expected to have their consent signed by a family caregiver [37]. Latvala, Vuokila-Oikkonen [37] further suggested that researchers are responsible for agreeing on the criteria used to determine the competency of potential participants in mental health research. The criteria are particularly relevant when potential participants have difficulties in understanding information due to their mental illness.
Procedure for Ensuring Quality Control (Quantitative)

Several studies highlighted procedures for ensuring quality control in mental health research (see Table 3). Quality control measures are used to achieve the highest reliability, validity and timeliness. Some studies demonstrate that ensuring quality control should consider factors such as pre-testing tools [25, 49], minimising non-response rates [25, 39] and monitoring of data collection processes [25, 33, 49]. Accordingly, two studies suggested that efforts should be made to re-approach participants who initially refuse to participate in the study. For instance, Liu, Huang [39] recommended that when a consumer of mental health services refuses to participate in a study (for example, due to low self-esteem) when approached for the first time, a different interviewer can re-approach the same participant to see if they are more comfortable participating after the first invitation. Three studies further recommend that monitoring data quality can be accomplished through “checks across individuals, completion
status and checks across variables” [25, 33, 49]. For example, Alonso, Angermeyer [25] advocate the use of various checks to verify completion of the interview and consistency across instruments against the standard procedure.
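A minimal sketch of what such automated checks might look like for survey records is given below. The record fields, completeness rule and consistency rule are hypothetical; they merely illustrate checks across individuals, completion status and checks across variables, rather than reproduce any instrument used in the reviewed studies.

```python
# Illustrative quality-control checks on hypothetical survey records:
# completion status per individual and a simple cross-variable range rule.
records = [
    {"id": 1, "age": 34, "k10_score": 22, "interview_complete": True},
    {"id": 2, "age": 41, "k10_score": None, "interview_complete": False},
    {"id": 3, "age": 29, "k10_score": 75, "interview_complete": True},  # out of range
]

def check_record(rec):
    issues = []
    if not rec["interview_complete"]:
        issues.append("interview not completed")
    if rec["k10_score"] is None:
        issues.append("missing k10_score")
    elif not 10 <= rec["k10_score"] <= 50:   # K10 total scores range from 10 to 50
        issues.append("k10_score out of range")
    return issues

for rec in records:
    problems = check_record(rec)
    if problems:
        print(rec["id"], problems)
```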
Procedure for Ensuring Quality Control (Qualitative)

Four studies highlighted the procedures for ensuring the quality control of qualitative data in mental health research [19, 32, 37, 46]. Two of these studies suggested that the quality of qualitative research is governed by the principles of credibility, dependability, transferability, reflexivity and confirmability [19, 32]. Some studies explain that the credibility or trustworthiness of qualitative research in mental health is determined by the methodological and interpretive rigour with which the phenomenon is investigated [32, 37]. Fossey, Harvey [32] propose that the criteria of methodological rigour for assessing the credibility of qualitative research are congruence, responsiveness or sensitivity to social context, appropriateness (importance and impact), adequacy and transparency. Similarly, interpretive rigour is classified as authenticity, coherence, reciprocity, typicality and permeability of the researcher’s intentions, including engagement and interpretation [32].

Robinson [46] explained that transparency (openness and honesty) is achieved if the research report explicitly addresses how the sampling, data collection, analysis and presentation were carried out. In particular, efforts to address these methodological issues highlight the extent to which the criteria for quality interact with standards for ethics. Similarly, responsiveness, or sensitivity, helps to situate or locate the study within a place, a time and a meaningful group [46]. The study should also consider the researcher’s background, location and connection to the study setting, particularly in the recruitment process; this is often described as role conflict or researcher bias. In the interpretive tradition, coherence highlights the ability to select an appropriate sampling procedure that mutually matches the research aims, questions, data collection, analysis, as well as any theoretical concepts or frameworks [32, 46]. Similarly, authenticity refers to the appropriate representation of participants’ perspectives in the research process and in the interpretation of results. Authenticity is maximised by providing evidence that participants are adequately represented in the interpretive process, or are given an opportunity to provide feedback on the researcher’s interpretation [32]. Again, the contribution of the researcher’s perspective
to the interpretation enhances permeability. Fossey, Harvey [32] further suggest that reflexive reporting, which distinguishes the participants’ voices from that of the researcher in the report, enhances the permeability of the researcher’s role and perspective.

One study highlighted the approaches used to ensure validity in qualitative research, which include saturation, identification of deviant or non-confirmatory cases, member checking and coding by consensus. Saturation refers to completeness in the research process: all the data collection, codes and themes required to address the phenomenon of inquiry have been achieved and no new data emerge [19]. Similarly, member checking is the process whereby participants, or others who share similar characteristics, review the study findings in order to elaborate on or confirm them [19]. Coding by consensus involves a collaborative approach to analysing the data: regular meetings are held among coders to discuss procedures for assigning codes to segments of data and to resolve differences in coding, and codes assigned to selected transcripts are compared to calculate a percentage agreement or a kappa measure of inter-rater reliability [19].

Two studies recommend acknowledging the importance of generalisability (transferability). This concept aims to provide sufficient information about the research setting, findings and interpretations for readers to appropriately determine whether the findings can be transferred from one context or population to another, a concept analogous to generalisability (external validity) in quantitative research [19, 32]. Similarly, researchers should employ reflexivity as a means of identifying and addressing potential biases in data collection and interpretation. Palinkas [19] suggests that such bias is associated with theoretical orientations; pre-conceived beliefs, assumptions and demographic characteristics; and familiarity and experience with the methods and phenomenon. Another approach to enhance the rigour of the analysis involves peer debriefing and support meetings held among team members, which facilitate detailed auditing during data analysis [19].
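Since the passage above mentions percentage agreement and the kappa measure of inter-rater reliability, a short worked sketch of both statistics on hypothetical coding decisions may help; the two coders' labels below are invented, and the calculation follows the standard Cohen's kappa formula, kappa = (p_o - p_e) / (1 - p_e).

```python
# Illustrative calculation of percentage agreement and Cohen's kappa
# for two coders assigning codes to the same transcript segments (hypothetical data).
from collections import Counter

coder_a = ["stigma", "access", "stigma", "family", "access", "stigma", "family", "access"]
coder_b = ["stigma", "access", "family", "family", "access", "stigma", "family", "stigma"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # p_o, percentage agreement

# Expected agreement by chance (p_e), from each coder's marginal code frequencies
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")   # agreement = 0.75, kappa = 0.63
```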
DISCUSSION

The integrative review was conducted to synthesise the evidence into recommended methodological considerations when conducting mental health research. The evidence from the review has been discussed according to five major themes: 1) mixed methods study in mental health research; 2) qualitative study in mental health research; 3) sampling in mental
health research; 4) data collection in mental health research; and 5) quality assurance procedures in mental health research.
Mixed Methods Study in Mental Health Research

The evidence suggests that mixed methods approaches in mental health are generally categorised according to their function (rationale, objectives or purpose), structure and process [18, 19, 43, 48]. A mixed methods study can be conducted for the purpose of achieving convergence, complementarity, expansion, development and sampling [18, 19, 43]. Researchers conducting mental health studies should understand the underlying functions or purpose of mixed methods. Similarly, mixed methods in mental health studies can be structured concurrently (simultaneously) or sequentially [18, 19, 42, 43, 48]. More importantly, the process of combining qualitative and quantitative data can be achieved through merging or converging the datasets, connecting them, or embedding one dataset within the other [18, 19, 42, 43, 48]. The evidence further recommends that researchers understand the stage at which the two sets of data are integrated and the rationale for doing so. This can inform researchers regarding the best stage and the appropriate ways of combining the two components of data to adequately address the research question(s).

The evidence recommended some methodological considerations in the design of mixed methods projects in mental health [18, 19, 45]. These include establishing early collaboration, becoming familiar with the topic, sharing information on the topic, negotiating any emerging problems and seeking contributions from team members. The involvement of varied expertise could ensure that methodological issues are clearly identified. However, addressing such issues midway or late in the design can negatively affect implementation [45], and robust discoveries can rarely be accommodated under the existing design. Therefore, the inclusion of varied methodological expertise at inception can lead to a more robust mixed methods design that maximises the contributions of team members. Whilst fundamental and philosophical differences between qualitative and quantitative methods may not be resolved, some workable solutions can be employed, particularly if challenges are viewed as philosophical rather than personal [45]. Cultural issues can be alleviated by understanding the concepts, norms and values of the setting, and by respecting and including the perspectives of the various stakeholders.
Qualitative Study in Mental Health Research

The review findings suggest that qualitative methods are relevant when conducting mental health research. Qualitative methods are mostly used where there has been limited previous research and an absence of theoretical perspectives; the approach is also used to gather initial pilot data. More importantly, qualitative methods are relevant when we want to understand sensitive issues, especially from consumers of mental health services, where the ‘lived experience’ is paramount [19, 28, 44]. Qualitative methods can help in understanding the experiences of consumers in the process of treatment, as well as their therapeutic relationship with mental health professionals. The experiences of consumers captured in qualitative data are particularly important in developing clinical policy [28]. The review identified two paradigms of qualitative methods used in mental health research: the interpretive and the critical approach [32]. The interpretive qualitative methods include phenomenology, ethnography and narrative approaches [32, 36]. Conversely, the critical qualitative approaches are participatory action research and the emancipatory approach. The review findings suggest that these qualitative approaches need critical consideration, particularly when dealing with consumers of mental health services.
Sampling in Mental Health Research

The review findings identified several sampling techniques used in mental health research. Quantitative studies usually employ probability sampling, whilst qualitative studies use non-probability sampling [25, 34]. The most common sampling techniques for quantitative studies are multi-stage sampling, which involves systematic, stratified and random sampling, and consecutive sampling. In contrast, the predominant sampling approaches for qualitative studies are purposive [19, 28, 32, 42, 46], snowball [30, 32, 46] and theoretical sampling [31, 32]. Sampling consumers of mental health services requires several important considerations: the appropriateness and adequacy of the sampling approach; the diversity and variety of consumers of services; attributes such as social or cultural experiences; shared concerns related to the study; practical and organisational skills; and ethical and sensitivity issues [31, 32, 46]. Sampling of consumers of mental health services should also consider their homogeneity and heterogeneity. Failure to address these considerations can present
difficulty in sampling and subsequently result in selection and reporting bias in mental health research.
Data Collection in Mental Health Research

The evidence recommends several approaches for collecting data in mental health research, including focus group discussions, extended in-depth interviews, observations, field notes, Delphi data collection and quasi-statistical techniques. Focus group discussions are widely used to collect data from consumers of mental health services [19, 28, 30, 31, 41, 44, 47] and appear to be a significant source of information. This approach promotes the participation of consumers with severe conditions, particularly through group-level interaction, and mental health researchers are encouraged to use it for that reason. Additionally, field notes can be used to supplement information and to analyse the interactions of consumers of mental health services in greater depth. Field notes are significant when seeking detailed accounts of the subjective experiences of consumers of mental health services [40]; they help researchers capture gestures and opinions that cannot be covered in an audio-tape recording, and are particularly relevant for complementing the richness of information collected through focus group discussions. Furthermore, in-depth interviews can be used to explore specific mental health issues, particularly culturally marked disorders, their personal and interpersonal significance, patient and family explanatory models, individual and family coping styles, as well as symptom symbols and protective mediators [19, 30, 34]. In-depth interviews are particularly relevant if the study is interested in the lived experiences of consumers without the influence of others in a group situation, and when consumers of mental health services are uncomfortable disclosing confidential information in front of others [31]. In a phenomenological context, the individual interview gives the consumer the opportunity to express themselves privately, without the tacit coercion that a group context can create. The review findings also identify significant factors requiring consideration when collecting data in mental health research. These include building confidence and trust between the researcher
and consumers [31, 37], gaining the confidence of the mental health professionals who manage consumers of mental health services, seeking ethical approval from the relevant committees, meeting consumers of services before data collection, arranging a mutually acceptable venue for the groups and providing transport services [41, 47]. The evidence confirms that the identification of consumers of mental health services to participate in research can be facilitated by mental health professionals. Similarly, the cultural and social differences of consumers of mental health services need consideration when collecting data from them [26, 31]. Moreover, our review advocates that standardised assessment instruments can be used to collect data from consumers of mental health services, particularly quantitative data. Self-completion instruments for collecting such information can be converted to interviewer-completion instruments [33, 49], with the interviewer reading the questions to consumers of mental health services and recording their responses. Collecting data from consumers of mental health services also requires significant preparation, such as training with co-investigators and representatives of consumers of mental health services [25, 30, 33–35, 39, 49]. This training helps interviewers and other investigators to understand the research project, particularly when translating and adapting an instrument for the study setting with the aim of achieving content, semantic, concept, criterion and technical equivalence [30]. The evidence indicates that interviewers need to be adequately trained when preparing for fieldwork to collect data from consumers of mental health services.
Quality Assurance Procedures in Mental Health Research

The evidence provides several approaches that can be employed to ensure quality in mental health research involving quantitative methods. These encompass seeking informed consent from consumers of mental health services [26, 37], pre-testing of tools [25, 49], minimising non-response rates and monitoring the data collection process [25, 33, 49]. The quality assurance process in mental health research primarily aims to achieve the highest reliability, validity and timeliness, in order to improve the quality of care provided. For instance, informed consent exposes consumers of mental health services to the aim(s), methods, anticipated benefits and potential hazards and discomforts of participating in the study. Consumers of mental health services who cannot respond to the informed consent process because of the severity of their illness can have the consent signed by their family caregivers. The implication is that researchers
should determine which categories of consumers of mental health services need family caregivers involved in the consent process [37]. The review findings advise that researchers should use pre-testing to evaluate the data collection procedure on a small scale and subsequently make any necessary changes [25]. Pre-testing aims to help the interviewers get acquainted with the procedures and to detect any potential problems [49]; researchers can discuss the findings of the pre-testing and resolve any challenges that arise before the actual fieldwork commences. Non-response rates in mental health research can be minimised by re-approaching consumers of mental health services who initially refuse to participate in the study. In addition, quality assurance for qualitative data can be ensured by applying the principles of credibility, dependability, transferability, reflexivity and confirmability [19, 32]. The credibility of qualitative research in mental health is achieved through methodological and interpretive rigour [32, 37]. Methodological rigour for assessing credibility relates to congruence, responsiveness or sensitivity to a social context, appropriateness, adequacy and transparency. Interpretive rigour, by contrast, is achieved through authenticity, coherence, reciprocity, typicality and permeability of researchers’ intentions, engagement and interpretation [32, 46].
Strengths and Limitations

This review has several strengths and limitations that require interpretation and explanation. Firstly, we employed a systematic approach involving five stages: problem identification, literature search, data evaluation, data synthesis and presentation of results [21]. We also searched six databases and developed a data extraction form to extract information. This rigorous process, in particular the database searches and the data extraction form, helped to capture comprehensive information on the subject. The integrative review also has several limitations, largely related to the search terms, language restrictions, time period and the appraisal of the methodological quality of included papers. In particular, differences in key terms concerning methodological issues across cultures and organisational contexts in mental health research may have meant that some relevant articles were missed. Similarly, limiting included studies to English language articles and those published from
January 2000 to July 2018 could have missed useful articles published in other languages or prior to 2000. The review did not assess the methodological quality of included papers using a critical appraisal tool; however, the combination of clearly articulated search methods, consultation with the research librarian, and review of articles with methodological experts in mental health research helped to address this limitation.
CONCLUSION

The review identified several methodological issues that need critical attention when conducting mental health research. The evidence confirms that studies addressing methodological considerations in mental health research focus largely on qualitative studies in transcultural settings, in addition to lessons from multi-site surveys in mental health research. Specifically, the methodological issues related to study design, sampling, data collection processes and quality assurance are critical to the research design chosen for any particular study. The review highlighted that researchers conducting mental health research can establish early collaboration, familiarise themselves with the topic, share information on the topic, negotiate to resolve any emerging problems and seek the contribution of clinical (or researcher) team members on the ground. In addition, the recruitment of consumers of mental health services should consider the appropriateness and adequacy of sampling approaches, the diversity and variety of consumers of services, their social or cultural experiences, practical and organisational skills, as well as ethical and sensitivity issues. The evidence confirms that, to recruit and collect data from consumers of mental health services effectively, researchers need to build confidence and trust with consumers and to gain the confidence of mental health service providers. Furthermore, seeking ethical approval from the relevant committee, meeting with consumers of services before data collection, arranging a mutually acceptable venue for the groups and providing transport services are all further important considerations. The review findings establish that researchers conducting mental health research should also consider several quality assurance issues, such as adequate training prior to data collection, seeking informed consent from consumers of mental health services, pre-testing of tools, minimising non-response rates and monitoring the data collection process. More specifically, quality assurance for qualitative data can be achieved by applying the principles of credibility, dependability, transferability, reflexivity and confirmability.
Based on the findings of this review, it is recommended that mental health research adequately consider the methodological issues regarding study design, sampling, data collection procedures and quality assurance in order to conduct meaningful research effectively.
ACKNOWLEDGEMENTS

The authors wish to thank the University of Newcastle Graduate Research and the School of Nursing and Midwifery for the Doctoral Scholarship offered to the lead author. The authors are also grateful to Ms. Debbie Booth, the Librarian, for supporting the literature search.
AUTHORS’ CONTRIBUTIONS

EB, APO’B and RM conceptualized the study. EB conducted the data extraction; APO’B and RM conducted the second review of the extracted data. EB, working closely with APO’B and RM, performed the content analysis and drafted the manuscript. EB, APO’B and RM reviewed and contributed to the intellectual content and agreed on its submission for publication. All authors read and approved the final manuscript.
REFERENCES
1. National Ethics Advisory Committee. Ethical guidelines for intervention studies: revised edition. Wellington (New Zealand): Ministry of Health; 2012.
2. Mann C. Observational research methods. Research design II: cohort, cross sectional, and case-control studies. Emerg Med J. 2003;20(1):54–60. doi: 10.1136/emj.20.1.54.
3. DiPietro NA. Methods in epidemiology: observational study designs. Pharmacotherapy: The Journal of Human Pharmacology and Drug Therapy. 2010;30(10):973–984. doi: 10.1592/phco.30.10.973.
4. Hong NQ, Pluyr P, Fabregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed Methods Appraisal Tool (MMAT). Canada: Intellectual Property Office, Canada; 2018.
5. Creswell JW, Creswell JD. Research design: qualitative, quantitative, and mixed methods approaches. Sage Publications; 2017.
6. Wisdom J, Creswell JW. Mixed methods: integrating quantitative and qualitative data collection and analysis while studying patient-centered medical home models. Rockville: Agency for Healthcare Research and Quality; 2013.
7. Bonita R, Beaglehole R, Kjellström T. Basic epidemiology. World Health Organization; 2006.
8. Centers for Disease Control and Prevention [CDC]. Principles of epidemiology in public health practice: an introduction to applied epidemiology and biostatistics. Atlanta, GA: US Dept. of Health and Human Services, Centers for Disease Control and Prevention (CDC), Office of Workforce and Career Development; 2012.
9. Parab S, Bhalerao S. Study designs. International Journal of Ayurveda Research. 2010;1(2):128. doi: 10.4103/0974-7788.64406.
10. Yang W, Zilov A, Soewondo P, Bech OM, Sekkal F, Home PD. Observational studies: going beyond the boundaries of randomized controlled trials. Diabetes Res Clin Pract. 2010;88:S3–S9. doi: 10.1016/S0168-8227(10)70002-4.
11. Department of Family Medicine (McGill University). Mixed Methods Appraisal Tool (MMAT) – Version 2011. Canada: McGill University; 2011. Available from: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/84371689/MMAT%202011%20criteria%20and%20tutorial%202011-06-29updated2014.08.21.pdf.
12. Besen J, Gan SD. A critical evaluation of clinical research study designs. Journal of Investigative Dermatology. 2014;134(3):1–4. doi: 10.1038/jid.2013.545.
13. Axelrod DA, Hayward R. Nonrandomized interventional study designs (quasi-experimental designs). In: Clinical research methods for surgeons. Springer; 2006. p. 63–76.
14. Thiese MS. Observational and interventional study design types; an overview. Biochemia Medica. 2014;24(2):199–210. doi: 10.11613/BM.2014.022.
15. Velengtas P, Mohr P, Messner DA. Making informed decisions: assessing the strengths and weaknesses of study designs and analytic methods for comparative effectiveness research. National Pharmaceutical Council; 2012.
16. Guerrera F, Renaud S, Tabbò F, Filosso PL. How to design a randomized clinical trial: tips and tricks for conduct a successful study in thoracic disease domain. Journal of Thoracic Disease. 2017;9(8):2692. doi: 10.21037/jtd.2017.06.147.
17. Bhide A, Shah PS, Acharya G. A simplified guide to randomized controlled trials. Acta Obstet Gynecol Scand. 2018;97(4):380–387. doi: 10.1111/aogs.13309.
18. Palinkas L, Horwitz SM, Chamberlain P, Hurlburt MS, Landsverk J. Mixed-methods designs in mental health services research: a review. Psychiatr Serv. 2011;62(3):255–263. doi: 10.1176/ps.62.3.pss6203_0255.
19. Palinkas L. Qualitative and mixed methods in mental health services and implementation research. J Clin Child Adolesc Psychol. 2014;43(6):851–861. doi: 10.1080/15374416.2014.910791.
20. World Health Organization [WHO]. Mental health: a state of well-being. 2014. Available from: http://www.who.int/features/factfiles/mental_health/en/.
21. Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546–553. doi: 10.1111/j.1365-2648.2005.03621.x.
22. Hopia H, Latvala E, Liimatainen L. Reviewing the methodology of an integrative review. Scand J Caring Sci. 2016;30(4):662–669. doi: 10.1111/scs.12327.
23. Pearson A, White H, Bath-Hextall F, Apostolo J, Salmond S, Kirkpatrick P. Methodology for JBI mixed methods systematic reviews. The Joanna Briggs Institute Reviewers' Manual. 2014;1:5–34.
24. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. doi: 10.1371/journal.pmed.1000097.
25. Alonso J, Angermeyer MC, Bernert S, Bruffaerts R, Brugha TS, Bryson H, et al. Sampling and methods of the European study of the epidemiology of mental disorders (ESEMeD) project. Acta Psychiatr Scand Suppl. 2004;109(420):8–20.
26. Baarnhielm S, Ekblad S. Qualitative research, culture and ethics: a case discussion. Transcultural Psychiatry. 2002;39(4):469–483. doi: 10.1177/1363461502039004493.
27. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. doi: 10.1191/1478088706qp063oa.
28. Brown C, Lloyd K. Qualitative methods in psychiatric research. Adv Psychiatr Treat. 2001;7(5):350–356. doi: 10.1192/apt.7.5.350.
29. Davidsen AS. Phenomenological approaches in psychology and health sciences. Qual Res Psychol. 2013;10(3):318–339. doi: 10.1080/14780887.2011.608466.
30. de Jong JT, Van Ommeren M. Toward a culture-informed epidemiology: combining qualitative and quantitative research in transcultural contexts. Transcultural Psychiatry. 2002;39(4):422–433. doi: 10.1177/136346150203900402.
31. Ekblad S, Baarnhielm S. Focus group interview research in transcultural psychiatry: reflections on research experiences. Transcultural Psychiatry. 2002;39(4):484–500. doi: 10.1177/136346150203900406.
32. Fossey E, Harvey C, McDermott F, Davidson L. Understanding and evaluating qualitative research. Aust N Z J Psychiatry. 2002;36(6):717–732. doi: 10.1046/j.1440-1614.2002.01100.x.
33. Jacobi F, Wittchen HU, Hölting C, Sommer S, Lieb R, Höfler M, Pfister H. Estimating the prevalence of mental and somatic disorders in the community: aims and methods of the German National Health Interview and Examination Survey. International Journal of Methods in Psychiatric Research. 2002;11(1):1–18. doi: 10.1002/mpr.118.
34. Koch A, Vogel A, Holzmann M, Pfennig A, Salize HJ, Puschner B, et al. MEMENTA-'Mental healthcare provision for adults with intellectual disability and a mental disorder'. A cross-sectional epidemiological multisite study assessing prevalence of psychiatric symptomatology, needs for care and quality of healthcare provision for adults with intellectual disability in Germany: a study protocol. BMJ Open. 2014;4(5):e004878. doi: 10.1136/bmjopen-2014-004878.
35. Korver N, Quee PJ, Boos HB, Simons CJ, de Haan L, Investigators G. Genetic risk and outcome of psychosis (GROUP), a multi-site longitudinal cohort study focused on gene–environment interaction: objectives, sample characteristics, recruitment and assessment methods. Int J Methods Psychiatr Res. 2012;21(3):205–221. doi: 10.1002/mpr.1352.
36. Larkin M, Watts S, Clifton E. Giving voice and making sense in interpretative phenomenological analysis. Qual Res Psychol. 2006;3(2):102–120. doi: 10.1191/1478088706qp062oa.
37. Latvala E, Vuokila-Oikkonen P, Janhonen S. Videotaped recording as a method of participant observation in psychiatric nursing research. J Adv Nurs. 2000;31(5):1252–1257. doi: 10.1046/j.1365-2648.2000.01383.x.
38. Leese MN, White IR, Schene AH, Koeter MW, Ruggeri M, Gaite L. Reliability in multi-site psychiatric studies. Int J Methods Psychiatr Res. 2001;10(1):29–42. doi: 10.1002/mpr.98.
39. Liu Z, Huang Y, Lv P, Zhang T, Wang H, Li Q, et al. The China mental health survey: II. Design and field procedures. Soc Psychiatry Psychiatr Epidemiol. 2016;51(11):1547–1557. doi: 10.1007/s00127-016-1269-5.
40. Montgomery P, Bailey PH. Field notes and theoretical memos in grounded theory. West J Nurs Res. 2007;29(1):65–79. doi: 10.1177/0193945906292557.
41. Owen S. The practical, methodological and ethical dilemmas of conducting focus groups with vulnerable clients. J Adv Nurs. 2001;36(5):652–658. doi: 10.1046/j.1365-2648.2001.02030.x.
42. Palinkas L, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health Ment Health Serv Res. 2015;42(5):533–544. doi: 10.1007/s10488-013-0528-y.
43. Palinkas L, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Adm Policy Ment Health Ment Health Serv Res. 2011;38(1):44–53. doi: 10.1007/s10488-010-0314-z.
44. Razafsha M, Behforuzi H, Azari H, Zhang Z, Wang KK, Kobeissy FH, et al. Qualitative versus quantitative methods in psychiatric research. Methods Mol Biol. 2012;829:49–62. doi: 10.1007/978-1-61779-458-2_3.
45. Robins CS, Ware NC, Dosreis S, Willging CE, Chung JY, Lewis-Fernández R. Dialogues on mixed-methods and mental health services research: anticipating challenges, building solutions. Psychiatr Serv. 2008;59(7):727–731. doi: 10.1176/ps.2008.59.7.727.
46. Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41. doi: 10.1080/14780887.2013.801543.
47. Schilder K, Tomov T, Mladenova M, Mayeya J, Jenkins R, Gulbinat W, et al. The appropriateness and use of focus group methodology across international mental health communities. International Review of Psychiatry. 2004;16(1–2):24–30. doi: 10.1080/09540260310001635078.
48. Schoonenboom J, Johnson RB. How to construct a mixed methods research design. KZfSS Kölner Zeitschrift für Soziologie und Sozialpsychologie. 2017;69(2):107–131. doi: 10.1007/s11577-017-0454-1.
49. Yin H, Phillips MR, Wardenaar KJ, Xu G, Ormel J, Tian H, et al. The Tianjin mental health survey (TJMHS): study rationale, design and methods. Int J Methods Psychiatr Res. 2017;26(3). doi: 10.1002/mpr.1535.
Chapter 8

WIKI SURVEYS: OPEN AND QUANTIFIABLE SOCIAL DATA COLLECTION

Matthew J. Salganik1, Karen E. C. Levy2

1 Department of Sociology, Center for Information Technology Policy, and Office of Population Research, Princeton University, Princeton, NJ, USA

2 Information Law Institute and Department of Media, Culture, and Communication, New York University, New York, NY, USA and Data & Society Research Institute, New York, NY, USA
ABSTRACT

In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. Advances in technology now enable new, hybrid methods that combine some of the benefits of both approaches. Drawing inspiration from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia evolves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents. We develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-of-concept case studies involving our free and open-source website www.allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods.

Citation (APA): Salganik, M. J., & Levy, K. E. (2015). Wiki surveys: Open and quantifiable social data collection. PLoS ONE, 10(5), e0123483. (17 pages)

Copyright: © 2015 Salganik, Levy. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
INTRODUCTION

In the social sciences, there is a longstanding tension between data collection methods that facilitate quantification and those that are open to unanticipated information. For example, one can contrast a traditional public opinion survey based on a series of pre-written questions and answers with an interview in which respondents are free to speak in their own words. The tension between these approaches derives, in part, from the strengths of each: open approaches (e.g., interviews) enable us to learn new and unexpected information, while closed approaches (e.g., surveys) tend to be more cost-effective and easier to analyze. Fortunately, advances in technology now enable new, hybrid approaches that combine the benefits of each. Drawing inspiration both from online information aggregation systems like Wikipedia and from traditional survey research, we propose a new class of research instruments called wiki surveys. Just as Wikipedia grows and improves over time based on contributions from participants, we envision an evolving survey driven by contributions from respondents.

Although the tension between open and closed approaches to data collection is currently most evident in disagreements between proponents of quantitative and qualitative methods, the trade-off between open and closed survey questions was also particularly contentious in the early days of survey research [1–3]. Although closed survey questions, in which respondents choose from a series of pre-written answer choices, have come to dominate the field, this is not because they have been proven superior for measurement. Rather, the dominance of closed questions is largely based on practical considerations: having a fixed set of responses dramatically simplifies data analysis [4].
The dominance of closed questions, however, has led to some missed opportunities, as open approaches may provide insights that closed methods cannot [4–8]. For example, in one study, researchers conducted a split-ballot test of an open and closed form of a question about what people value in jobs [5]. When asked in closed form, virtually all respondents provided one of the five researcher-created answer choices. But, when asked in open form, nearly 60% of respondents provided a new answer that fell outside the original five choices. In some situations, these unanticipated answers can be the most valuable, but they are not easily collected with closed questions. Because respondents tend to confine their responses to the choices offered [9], researchers who construct all the possible choices necessarily constrain what can be learned. Projects that depend on crowdsourcing and user-generated content, such as Wikipedia, suggest an alternative approach. What if a survey could be constructed by respondents themselves? Such a survey could produce clear, quantifiable results at a reasonable cost, while minimizing the degree to which researchers must impose their pre-existing knowledge and biases on the data collection process. We see wiki surveys as an initial step toward this possibility. Wiki surveys are intended to serve as a complement to, not a replacement for, traditional closed and open methods. In some settings, traditional methods will be preferable, but in others we expect that wiki surveys may produce new insights. The field of survey research has always evolved in response to new opportunities created by changes in technology and society [10–16], and we see this research as part of that longstanding evolution. In this paper, we develop three general principles that underlie wiki surveys: they should be greedy, collaborative, and adaptive. Building on these principles, we develop methods for data collection and data analysis for one type of wiki survey, a pairwise wiki survey. Using two proof-ofconcept case studies involving our free and open-source website www. allourideas.org, we show that pairwise wiki surveys can yield insights that would be difficult to obtain with other methods. The paper concludes with a discussion of the limitations of this work and possibilities for future research.
WIKI SURVEYS

Online information aggregation projects, of which Wikipedia is an exemplar, can inspire new directions in survey research. These projects, which are built from crowdsourced, user-generated content, tend to share certain properties
that are not characteristic of traditional surveys [17–20]. These properties guide our development of wiki surveys. In particular, we propose that wiki surveys should follow three general principles: they should be greedy, collaborative, and adaptive.
Greediness

Traditional surveys attempt to collect a fixed amount of information from each respondent; respondents who want to contribute less than one questionnaire’s worth of information are considered problematic, and respondents who want to contribute more are prohibited from doing so. This contrasts sharply with successful information aggregation projects on the Internet, which collect as much or as little information as each participant is willing to provide. Such a structure typically results in highly unequal levels of contribution: when contributors are plotted in rank order, the distributions tend to show a small number of heavy contributors—the “fat head”—and a large number of light contributors—the “long tail” [21, 22] (Fig 1). For example, the number of edits to Wikipedia per editor roughly follows a power-law distribution with an exponent of 2 [22]. If Wikipedia were to allow 10 and only 10 edits per editor—akin to a survey that requires respondents to complete one and only one form—it would exclude about 95% of the edits contributed. As such, traditional surveys potentially leave enormous amounts of information from the “fat head” and “long tail” uncollected. Wiki surveys, then, should be greedy in the sense that they should capture as much or as little information as a respondent is willing to provide.
Figure 1: Schematic of rank order plot of contributions to successful online information aggregation projects.
These systems can handle both heavy contributors (“the fat head”), shown on the left side of the plot, and light contributors (“the long tail”), shown on the right side of the plot. Traditional survey methods utilize information from neither the “fat head” nor the “long tail” and thus leave huge amounts of information uncollected. https://doi.org/10.1371/journal.pone.0123483.g001
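As a rough illustration of how much information a fixed-length instrument can leave uncollected, a small simulation along the following lines can be run in R (the language the authors use later for their analysis code). The power-law form, the cutoff k_max and the 10-item cap are assumptions chosen purely for illustration; the roughly 95% figure quoted above comes from the empirical Wikipedia edit distribution, so a simulated share under these particular assumptions will generally come out lower.

# Illustrative simulation of the "fat head / long tail" point: when
# contributions per person are heavy-tailed, capping each person at a fixed
# amount discards a large share of what they would contribute. The
# distribution, cutoff and cap below are assumptions, not values from the paper.
set.seed(1)
k_max <- 1e6                              # assumed largest contribution per person
k     <- 1:k_max
p     <- k^(-2) / sum(k^(-2))             # discrete power law with exponent 2
contrib <- sample(k, 1e5, replace = TRUE, prob = p)

cap      <- 10                            # "one questionnaire's worth" per person
captured <- sum(pmin(contrib, cap))
total    <- sum(contrib)
1 - captured / total                      # share of potential information left uncollected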
Collaborativeness

In traditional surveys, the questions and answer choices are typically written by researchers rather than respondents. In contrast, wiki surveys should be collaborative in that they are open to new information contributed directly by respondents that may not have been anticipated by the researcher, as often happens during an interview. Crucially, unlike a traditional “other” box in a survey, this new information would then be presented to future respondents for evaluation. In this way, a wiki survey bears some resemblance to a focus group in which participants can respond to the contributions of others [23, 24]. Thus, just as a community collaboratively writes and edits Wikipedia, the content of a wiki survey should be partially created by its respondents. This approach to collaborative survey construction resembles some forms of survey pre-testing [25]. However, rather than thinking of pre-testing as a phase distinct from the actual data collection, in wiki surveys the collaboration process continues throughout data collection.
Adaptivity

Traditional surveys are static: survey questions, their order, and their possible answers are determined before data collection begins and do not evolve as more is learned about the parameters of interest. This static approach, while easier to implement, does not maximize the amount that can be learned from each respondent. Wiki surveys, therefore, should be adaptive in the sense that the instrument is continually optimized to elicit the most useful information, given what is already known. In other words, while collaborativeness involves being open to new information, adaptivity involves using the information that has already been gathered more efficiently. In the context of wiki surveys, adaptivity is particularly important given that respondents can provide different amounts of information (due to greediness) and that some answer choices are newer than others (due to collaborativeness). Like greediness and collaborativeness, adaptivity increases the complexity
of data analysis. However, research in related areas [26–33] suggests that gains in efficiency from adaptivity can more than offset the cost of added complexity.

Pairwise Wiki Surveys

Building on previous work [34–40], we operationalize these three principles into what we call a pairwise wiki survey. A pairwise wiki survey consists of a single question with many possible answer items. Respondents can participate in a pairwise wiki survey in two ways: first, they can make pairwise comparisons between items (i.e., respondents vote between item A and item B), and second, they can add new items that are then presented to future respondents. Pairwise comparison, which has a long history in the social sciences [41], is an ideal question format for wiki surveys because it is amenable to the three criteria described above. Pairwise comparison can be greedy because the instrument can easily present as many (or as few) prompts as each respondent is willing to answer. New items contributed by respondents can easily be integrated into the choice sets of future respondents, enabling the instrument to be collaborative. Finally, pairwise comparison can be adaptive because the pairs to be presented can be selected to maximize learning given previous responses. These properties exist because pairwise comparisons are both granular and modular; that is, the unit of contribution is small and can be readily aggregated [17].

Pairwise comparison also has several practical benefits. First, pairwise comparison makes manipulation, or “gaming,” of results difficult because respondents cannot choose which pairs they will see; instead, this choice is made by the instrument. Thus, when there is a large number of possible items, a respondent would have to respond many times in order to be presented with the item that she wishes to “vote up” (or “vote down”) [42]. Second, pairwise comparison requires respondents to prioritize items—that is, because the respondent must select one of two discrete answer choices from each pair, she is prevented from simply saying that she likes (or dislikes) every option equally strongly. This feature is particularly valuable in policy and planning contexts, in which finite resources make prioritization of ideas necessary. Finally, responding to a series of pairwise comparisons is reasonably enjoyable, a common characteristic of many successful web-based social research projects [43, 44].
Data Collection

In order to collect pairwise wiki survey data, we created the free and open-source website All Our Ideas (www.allourideas.org), which enables anyone to create their own pairwise wiki survey. To date, about 6,000 pairwise wiki surveys have been created that include about 300,000 items and 7 million responses. By providing this service online, we are able to collect a tremendous amount of data about how pairwise wiki surveys work in practice, and our steady stream of users provides a natural testbed for further methodological research.

The data collection process in a pairwise wiki survey is illustrated by a project conducted by the New York City Mayor’s Office of Long-Term Planning and Sustainability in order to integrate residents’ ideas into PlaNYC 2030, New York’s citywide sustainability plan. The City has typically held public meetings and small focus groups to obtain feedback from the public. By using a pairwise wiki survey, the Mayor’s Office sought to broaden the dialogue to include input from residents who do not traditionally attend public meetings.

To begin the process, the Mayor’s Office generated a list of 25 ideas based on their previous outreach (e.g., “Require all big buildings to make certain energy efficiency upgrades,” “Teach kids about green issues as part of school curriculum”). Using these 25 ideas as “seeds,” the Mayor’s Office created a pairwise wiki survey with the question “Which do you think is a better idea for creating a greener, greater New York City?” Respondents were presented with a pair of ideas (e.g., “Open schoolyards across the city as public playgrounds” and “Increase targeted tree plantings in neighborhoods with high asthma rates”), and asked to choose between them (see Fig 2). After choosing, respondents were immediately presented with another randomly selected pair of ideas. Respondents were able to continue contributing information about their preferences for as long as they wished by either voting or choosing “I can’t decide.” Crucially, at any point, respondents were able to contribute their own ideas, which—pending approval by the wiki survey creator—became part of the pool of ideas to be presented to others. Respondents were also able to view the popularity of the ideas at any time, making the process transparent. However, by decoupling the processes of voting and viewing the results—which occur on distinct screens (see Fig 2)—the site prevents a respondent from having immediate information about the opinions of others when she responds, which minimizes the risk of social influence and information cascades [43, 45–48].
Figure 2: Response and results interfaces at www.allourideas.org.
This example is from a pairwise wiki survey created by the New York City Mayor’s Office to learn about residents’ ideas about how to make New York “greener and greater.” https://doi.org/10.1371/journal.pone.0123483.g002

The Mayor’s Office launched its pairwise wiki survey in October 2010 in conjunction with a series of community meetings to obtain resident feedback. The effort was publicized at meetings in all five boroughs of the city and via social media. Over about four months, 1,436 respondents contributed 31,893 responses and 464 ideas to the pairwise wiki survey.
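The collection loop just described can be sketched in a few lines of R. This is an illustration of the logic only, not the actual All Our Ideas implementation; the seed ideas are taken from the PlaNYC example above, while the added idea and the simulated votes are invented.

# Toy sketch of pairwise wiki survey data collection (illustration only;
# not the actual All Our Ideas code). Seed ideas are from the PlaNYC example;
# the added idea and the simulated votes are invented.
items <- c("Open schoolyards across the city as public playgrounds",
           "Increase targeted tree plantings in neighborhoods with high asthma rates",
           "Require all big buildings to make certain energy efficiency upgrades")

responses <- data.frame(respondent = integer(0), winner = character(0),
                        loser = character(0), stringsAsFactors = FALSE)

ask_one <- function(respondent_id) {
  pair   <- sample(items, 2)        # the instrument, not the respondent, picks the pair
  choice <- sample(pair, 1)         # stand-in for a real respondent's vote
  responses <<- rbind(responses,
                      data.frame(respondent = respondent_id,
                                 winner = choice, loser = setdiff(pair, choice),
                                 stringsAsFactors = FALSE))
}

add_item <- function(new_item) {    # collaborativeness: respondent ideas join the pool
  items <<- union(items, new_item)  # (pending approval by the wiki survey creator)
}

set.seed(5)
for (i in 1:5) ask_one(respondent_id = 1)   # greediness: answer as many prompts as desired
add_item("Hypothetical new idea contributed by a respondent")
ask_one(respondent_id = 2)                  # later respondents can see the new item
responses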
Data Analysis

Given this data collection process, we analyze data from a pairwise wiki survey in two main steps (Fig 3). First, we use responses to estimate the opinion matrix Θ that includes an estimate of how much each respondent values each item. Next, we summarize the opinion matrix to produce a score for each item that estimates the probability that it will beat a randomly chosen item for a randomly chosen respondent. Because this analysis is modular, either step—estimation or summarization—could be improved independently.
Figure 3: Summary of data analysis plan.
We use responses to estimate the opinion matrix Θ and then we summarize the opinion matrix with the scores of each item. https://doi.org/10.1371/journal.pone.0123483.g003
Estimating the opinion matrix

The analysis begins with a set of pairwise comparison responses that are nested within respondents. For example, Fig 3 shows five hypothetical responses from two respondents. These responses are used to estimate the opinion matrix Θ, which has one row for each respondent and one column for each item, where θ_{j,k} is the amount that respondent j values item k (or more generally, the amount that respondent j believes item k answers the question being asked). In the New York City example described above, θ_{j,k} could be the amount that a specific respondent values the idea “Open schoolyards across the city as public playgrounds.”

Three features of the response data complicate the process of estimating the opinion matrix Θ. First, because the wiki survey is greedy, we have an unequal number of responses from each respondent. Second, because the wiki survey is collaborative, there are some items that can never be presented to some respondents. For example, if respondent j contributed an item, then none of the previous respondents could have seen that item. Collectively, the greediness and the collaborativeness mean that in practice we often have to estimate a respondent’s value for an item that she has never encountered. The third problem is that responses are in the form of pairwise comparisons, which means that we can only observe a respondent’s relative preference between two items, not her absolute feeling about either item.

In order to address these three challenges, we propose a statistical model that assumes that respondents’ responses reflect their relative preferences between items (i.e., the Thurstone-Mosteller model [41, 49, 50]) and that the distribution of preferences across respondents for each item follows a normal distribution. Given these assumptions and weakly informative priors, we can perform Bayesian inference to estimate the θ_{j,k}’s that are most consistent with the responses that we observe and the assumptions
that we have made. One important feature of this modeling strategy is that for those who contribute many responses, we can better estimate their row in the opinion matrix, and for those who contribute fewer responses, we have to rely more on the pooling of information from other respondents (i.e., imputation). The specific functional forms that we assume result in the following posterior distribution, which resembles a hierarchical probit model:
(1) where X is an appropriately constructed design matrix, Y is an appropriately constructed outcome vector, μ = μ_1 … μ_K represents the mean appeal of each item, and μ_0 = μ_{0[1]} … μ_{0[K]} and τ²_0 = τ²_{0[1]} … τ²_{0[K]} are parameters of the priors for the mean appeal of each item (μ). This statistical model is just one of many possible approaches to estimating the opinion matrix from the response data, and we hope that future research will develop improved approaches. We fully derive the model, discuss situations in which our modeling assumptions might not hold, and describe the Gibbs sampling approach that we use to make repeated draws from the posterior distribution. Computer code to make these draws was written in R [51] and utilized the following packages: plyr [52], multicore [53], bigmemory [54], truncnorm [55], testthat [56], Matrix [57], and matrixStats [58].
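A deliberately simplified version of this estimation step can be sketched as follows. The sketch pools across respondents and fits a single appeal parameter per item by maximum likelihood with a probit link (a basic Thurstone-Mosteller fit), whereas the model described above estimates a full respondent-by-item opinion matrix with weakly informative priors and Gibbs sampling; the item labels and the "true" appeals used to simulate responses are invented.

# A pooled, maximum-likelihood simplification of the estimation step
# (assumption: one appeal parameter per item, shared by all respondents;
# the paper's actual model is a hierarchical Bayesian probit fit by Gibbs
# sampling). Item labels and "true" appeals below are invented.
set.seed(3)
items   <- c("A", "B", "C", "D")
true_mu <- c(A = 1.0, B = 0.5, C = 0.0, D = -0.5)

# Simulate pairwise responses under the Thurstone-Mosteller assumption:
# P(first item beats second item) = pnorm(mu_first - mu_second).
n      <- 500
first  <- sample(items, n, replace = TRUE)
second <- sapply(first, function(f) sample(setdiff(items, f), 1))
won    <- rbinom(n, 1, pnorm(true_mu[first] - true_mu[second]))

# Design matrix: +1 for the first item in the pair, -1 for the second.
K <- length(items)
X <- matrix(0, n, K, dimnames = list(NULL, items))
for (r in seq_len(n)) {
  X[r, first[r]]  <-  1
  X[r, second[r]] <- -1
}

# Probit regression with no intercept recovers the relative appeals;
# one item (here D) is dropped to fix the origin of the scale.
fit <- glm(won ~ X[, -K] - 1, family = binomial(link = "probit"))
est <- c(coef(fit), 0)
names(est) <- items
round(est, 2)   # appeals relative to item D (true relative values: 1.5, 1.0, 0.5, 0)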
Summarizing opinion matrix

Once estimated, the opinion matrix Θ may include hundreds of thousands of parameters—there are often thousands of respondents and hundreds of items—that are measured on a non-intuitive scale. Therefore, the second step of our analysis is to summarize the opinion matrix Θ in order to make it more interpretable. The ideal summary of the opinion matrix will likely vary from setting to setting, but our preferred summary statistic is what we call the score of each item, ŝ_i, which is the estimated chance that it will beat a randomly chosen item for a randomly chosen respondent. That is,
(2)
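As a concrete illustration of this summarization step, the following R sketch computes scores from a single simulated draw of the opinion matrix. The simulated Θ and the probit comparison rule are assumptions made for illustration, not the paper's estimates.

# Sketch of the summarization step: compute item scores from one posterior
# draw of the opinion matrix Theta (rows = respondents, columns = items).
# Theta is simulated here purely for illustration.
set.seed(4)
J <- 200; K <- 5
Theta <- matrix(rnorm(J * K), J, K,
                dimnames = list(NULL, paste0("item_", 1:K)))

score <- function(Theta, i) {
  others <- setdiff(seq_len(ncol(Theta)), i)
  # P(item i beats item k for respondent j) under a probit comparison rule,
  # averaged over respondents j and over the other items k, in percent.
  100 * mean(pnorm(Theta[, i] - Theta[, others, drop = FALSE]))
}

scores <- sapply(seq_len(K), function(i) score(Theta, i))
names(scores) <- colnames(Theta)
round(scores, 1)

# With t posterior draws Theta^(1), ..., Theta^(t), repeating this calculation
# for each draw and taking the 2.5% and 97.5% quantiles of each item's scores
# gives the 95% posterior intervals discussed below.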
The minimum score is 0 for an item that is always expected to lose, and the maximum score is 100 for an item that is always expected to win. For example, a score of 50 for the idea “Open schoolyards across the city as public playgrounds” means that we estimate it is equally likely to win or lose when compared to a randomly selected idea for a randomly selected respondent. To construct 95% posterior intervals around the estimated scores, we use the t posterior draws of the opinion matrix (Θ^(1), Θ^(2), …, Θ^(t)) to calculate t posterior draws of s (ŝ^(1), ŝ^(2), …, ŝ^(t)). From these draws, we calculate the 95% posterior intervals around ŝ_i by finding values a and b such that Pr(ŝ_i > a) = 0.025 and Pr(ŝ_i