Cost-Inclusive Evaluation: Planning It, Doing It, Using It 9781462551248, 9781462551255, 1462551246

Is a given treatment, intervention, or program worth it? How can a program do more or better with less? Evaluating the c


English Pages 278 [299]


Table of Contents
Cover
Half Title Page
Title Page
Copyright
Dedication
Foreword
Preface
Contents
I. The Why, the Types, and the Tools of Cost-Inclusive Evaluation
1. Cost‑Inclusive Evaluation
Cost Data: The Lifeblood of Cost-Inclusive Decision Making
Why Consider Costs in Evaluations?
Implicit Inclusion of Costs in All Evaluations
Dangers of Not Making Costs Explicit in Program Evaluations
Current Use of Cost-Inclusive Evaluations by Funding Decision Makers
Why Many Program Evaluators Are Ideally Positioned to Include Costs in Evaluations
Roles for Quantitative, Qualitative, and Mixed Methods
Resistance to Inclusion of Costs in Evaluation
Strategies That Can Be Used to Conduct a Cost-Inclusive Evaluation
Summary
Discussion Questions
Appendix 1.1. Free Electronic Resources on Cost Analysis
Appendix 1.2. Institutions Offering Training in Cost-Benefit Analysis
2. Types of Costs and Outcomes That Need to Be Considered in Cost-Inclusive Evaluations
Fixed Costs versus Variable Costs
Direct Costs versus Indirect Costs
Capital Costs versus Recurrent or Operational Costs
Opportunity Costs
Sunk Costs
Tangible versus Intangible Costs and Outcomes
Monetary versus Nonmonetary Costs and Outcomes
Quantitative versus Qualitative Costs and Outcomes
Other Types of Costs
When Are Differences in Costs and Outcomes Real?
Questions about Cost and Outcome Differences
Distinguishing Effectiveness from Benefits
Summary
Discussion Questions
3. Tools for Identifying and Measuring Costs and Outcomes and Other Issues for Consideration
Challenges with Gathering Cost Data
Double Counting and Its Implications
Costs Identification Tools
Outcomes Identification Tools
Macro-, Meso-, and Micro-Level Program Operation and Evaluation
Why Budgets and Accounting Records Are Often Not Enough
Ethics and Cost-Inclusive Evaluation
Common Traps and Pitfalls
Summary
Discussion Questions
II. Adapting Economic Methods to Enhance Cost-Inclusive Evaluation
4. Economic Appraisal Methodologies
Evolution and Development of Cost Analysis
Traditional Economic Frameworks
Time Preference and Discounting
Net Present Value
Cost-Benefit Analysis
Internal Rate of Return
Payback Period and Discounted Payback Period
Cost-Effectiveness Analysis
Cost-Feasibility Analysis
Cost-Utility Analysis
Return on Investment
Surrogate Market Valuation Methodologies
Advantages and Disadvantages of Different Economic Appraisal Methods
Summary
Discussion Questions
Economic Appraisal Formulas
Appendix 4.1. Executive Order 12291
Appendix 4.2. Present-Value Discount Tables
5. Considerations When Using Economic Appraisal Methods
Which Cost-Analytical Methodology Is Best?
Perspective for the Study
When Different Stakeholders Prefer Different Cost-Analytical Frameworks
When Stakeholders Wish to Exclude Some Costs, Benefits, Effectiveness Measures, or Cost-Analytical Frameworks
Inappropriate Interpretations of Findings from Different Cost-Inclusive Evaluation Frameworks
Discount Rate Choices and Their Impact on Analyses
Market Prices versus Shadow Prices
The Role of Sensitivity Analyses in Gauging Uncertainty
Assumptions Used
Implications for Over- and Underestimation of Costs and Outcomes
Summary
Discussion Questions
III. Adapting Concepts and Tools from Accounting to Improve Cost-Inclusive Evaluation
6. Financial Accounting Concepts and Tools
Understanding Accounting Records to Extract Relevant Data
Income Statement: Importance, Terminology, and Interpretation
Statement of Cash Flows: Importance, Terminology, and Interpretation
Balance Sheet: Importance, Terminology, and Interpretation
Notes to Financial Statements
Ratio Analysis
Cash Budget
Summary
Discussion Questions
7. Cost and Management Accounting Concepts and Tools
How Cost and Management Accounting Can Enhance Decision Making and Cost-Inclusive Evaluation
Understanding Cost Behavior
Relevant Range
Understanding Program Cost or Activity Drivers
Understanding Program Cost Structure
Break-Even Analysis
Cost-Volume-Profit Analysis
Relevant Cost Analysis
Summary
Discussion Questions
Cost and Management Accounting Formulas
IV. Cost‑Inclusive Evaluation for the Scientist–Manager–Practitioner
8. Breaking Down Cost by Activity for Better Cost‑Inclusive Evaluations
Capture the Essence of a Program: Its Activities
Develop a Resource × Activity Matrix to Characterize and Analyze the Program
List and Define the Major Activities of the Program
List and Define the Major Resources Used by the Program
Capture Activity Occurrence, Frequency, and Intensity
Activities Planned versus Activities Implemented
Using Resource × Activity Matrixes to Improve Program Cost-Effectiveness
Summarizing Resources → Activities Findings, Quantitatively and Qualitatively
Assess Reliability and Validity of Resource → Activity Findings
Resource Costing
Valuing Resources Consumed
Dealing with Unmeasured or Unallocated Resources
So, What’s Next?
Summary
Discussion Questions
9. Completing the Model with Activity → Process and Process → Outcome Analyses
Discover the Biopsychosocial Processes That Make a Program Work
Psychological and Other Methods of Measuring Processes
Methods for Finding Biological, Psychological, and Social Processes That Foster Program Outcomes
Complete the Model: Assess Existence, Direction, and Strength of Activity → Process and Process → Outcome Relationships
Mixed Methods RAPOA That Quantifies Resource Use for Changes in Activities, in Processes, and in Outcomes
Resource × Activity Analysis Matrix
Activity × Process Analysis Matrix
Process × Outcome Analysis Matrix
Formative Findings of Cost-Inclusive Mixed Methods Evaluation
Complexities and Individual Variability in Indices of Cost-Effectiveness
Cost-Effectiveness and Cost/Effectiveness (and Effectiveness/Cost) Ratios
Cost-Benefit and Benefit/Cost Ratios
Conclusion: Really Doing Cost-Inclusive Evaluation!
Summary
Statistical Analysis Programs and Associated Websites
Discussion Questions
List of Acronyms
Glossary
References
Author Index
Subject Index
About the Authors


COST-INCLUSIVE EVALUATION

Cost-Inclusive Evaluation

Planning It, Doing It, Using It

NADINI PERSAUD | BRIAN T. YATES

Foreword by Michael Scriven

THE GUILFORD PRESS
New York  London

Copyright © 2023 The Guilford Press
A Division of Guilford Publications, Inc.
370 Seventh Avenue, Suite 1200, New York, NY 10001
www.guilford.com

All rights reserved

No part of this book may be reproduced, translated, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, microfilming, recording, or otherwise, without written permission from the publisher.

Printed in the United States of America

This book is printed on acid-free paper.

Last digit is print number: 9 8 7 6 5 4 3 2 1

Library of Congress Cataloging-in-Publication Data
Names: Persaud, Nadini, author. | Yates, Brian T., author.
Title: Cost-inclusive evaluation : planning it, doing it, using it / Nadini Persaud, Brian T. Yates ; foreword by Michael Scriven.
Description: New York : The Guilford Press, 2023. | Includes bibliographical references and index.
Identifiers: LCCN 2022047758 | ISBN 9781462551248 (paperback) | ISBN 9781462551255 (hardcover)
Subjects: LCSH: Cost effectiveness. | Value analysis (Cost control) | BISAC: SOCIAL SCIENCE / Methodology
Classification: LCC HD47.4 .P45 2023 | DDC 658.15/54—dc23/eng/20221125
LC record available at https://lccn.loc.gov/2022047758

To our dear friend and colleague Michael Scriven, who has dedicated his life to imparting knowledge and to making significant contributions to the development of the evaluation profession

Foreword

As evaluation began to take shape as a semi-independent discipline in the mid-20th century, it became clear that the social science or educational research backgrounds of the early contributors were missing a key component. In the real world of evaluating real programs, the calculation of their costs, especially their opportunity costs, was being done without any professional skill. Because an accurate calculation of costs is often the governing—or at least one of the governing—elements in making practical decisions about whether to cancel or proceed with a project, this shortcoming led to an unacceptable defect in practical evaluation.

The first move to close this gap was made by a Stanford economist, who produced a very useful text on evaluation with a reasonable amount of attention to cost analysis. It was successful, but not all that was needed. Some years later, the present volume has been produced, with a quantum jump in the professionalism of cost analysis and without burying it in technical jargon—a double virtue. It is an honor to be asked to introduce this approach.

It should also be stressed that this volume does not shed all attention to the economic dimension, and Part IV wraps up the book with two chapters pulling that strand together with the preceding chapters, in which the accounting complexities of the cost analysis are the central focus. There are still some unanswered questions about the logical foundations of cost analysis that need further study—for example, avoiding circularity in the definition of cost as the most valuable forsaken alternative—but in my opinion it is essential for anybody coming into practical evaluation work to read and incorporate the lessons covered here in their practice.

In fact, this book creates the possibility of a radical alternative approach to teaching economics or even accounting, especially for those people whose interests center on learning or training managers to supervise planning of future projects and dimensions for development. This would not be the first occasion on which the problems of practical evaluation led to new foundations for older disciplines or their applications. For this reader, it seems clear that the economics of war, pandemics, and leadership are overdue for reconsideration; for each of these, the cost analysis needs to be radically redone.

Michael Scriven, PhD
Professor and Co-Director of the Claremont Evaluation Center
Claremont Graduate University

Preface

Costs of programs are one of the missing links between a superficial evaluation and an evaluation that will get changes made or funding delivered. As we transition into one of the worst global economic recessions in our history, evaluation of program costs and cost-inclusive evaluations take on increasing importance and relevance, as organizations everywhere struggle to deliver services in the midst of budget cuts and declining funding opportunities. Simultaneously, the demand for health and human services of every conceivable nature is increasing due to an escalation in social and fiscal problems brought on by rising global unemployment, mental health problems triggered by concerns about how to survive and be productive in this “new normal,” concerns about finances and how to meet everyday expenditures, rising domestic violence brought on by prolonged and close confinement in multiple lockdowns, and a host of other issues. These unprecedented times necessitate that those in charge of programs—managers and especially administrators—make better use of dwindling resources to do more societal good for more people—to do the best, for the most, for the least.

The use of resources always carries an opportunity cost. If we use our resources of time, energy, and funds for one thing, those resources are not available for something else. As such, we need to ensure that resources are used in the best manner to optimize societal good. However, to do this, we must have a good and proper understanding not only of costs but of what drives costs, so we can ensure that we control and manage our precious, increasingly limited resources. This requires knowledge of myriad issues relevant to costs and strategic leveraging of this information to maximize benefits. This requires not just evaluation but cost-inclusive evaluation.

Evaluators, administrators, managers, staff of human services, social scientists, government representatives, and agency officials all need to understand what cost-inclusive evaluation is and what it can and cannot do. These interest groups also need to understand that there are a variety of tools for understanding cost information. These tools empower us to analyze cost data to make requests for funding more meaningful and insightful and to make program operations more efficient and effective. Today, just about every program and agency is asked regularly to show whether something works or not, how much it costs, and whether further (or any) investment is prudent given available alternative uses for those resources consumed by a program or agency.

This book, Cost-Inclusive Evaluation, is designed for a varied, international audience of evaluators, students of evaluation, program managers, and evaluation users. In many ways, this is a realization of the scientist–manager–practitioner model. It comes at a fitting time, when understanding and improving the relationship between costs and outputs is more important than ever.

This book has been carefully designed with the end user in mind. As such, it makes no assumption that readers have any knowledge of accounting, economics, or financial issues. We have written this book to present a gentle learning curve: We endeavored to make it user-friendly and pragmatic, straightforward and clear. The topics covered were selected to give the reader sufficient understanding of the cost-inclusive evaluation methods that can be used to inform decision making in a way that can revolutionize how decision makers comprehend and use cost data, now and in the future. In particular, we have attempted to avoid the complex economic and statistical jargon, notation, and formulas common in other texts, which can prevent persons without strong mathematical or economic backgrounds from using cost-related evaluation procedures. All our calculations use basic arithmetic, such as addition, subtraction, multiplication, and division. No calculus, no trigonometry needed!
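The opportunity-cost idea discussed earlier in this preface (resources committed to one use are unavailable for any other) can be sketched in a few lines of code, in keeping with the authors' promise that only basic arithmetic is needed. This is purely illustrative: the function, the program names, and the dollar figures below are hypothetical and are not drawn from the book.

```python
# Opportunity cost: the value of the best alternative forgone when
# resources are committed to one program. Hypothetical illustration only.

def opportunity_cost(chosen, alternatives):
    """Return the highest value among the alternatives NOT chosen."""
    return max(value for name, value in alternatives.items() if name != chosen)

# Entirely hypothetical annual-benefit figures for three candidate programs.
program_values = {
    "job training": 120_000.0,
    "housing support": 95_000.0,
    "health screening": 80_000.0,
}

# Funding "job training" forgoes "housing support", the best alternative,
# so the opportunity cost of that choice is 95,000.
print(opportunity_cost("job training", program_values))
```

Nothing beyond comparison of a few numbers is involved, which mirrors the claim above that cost-inclusive thinking does not require advanced mathematics.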
We have incorporated call-out boxes, tables, and figures to highlight and simplify the presentation. For ease of reference, boldfaced terms are also included in a Glossary at the back of the book. Thus, we hope that Cost-Inclusive Evaluation will:

- inform readers with all levels of quantitative skills and interests about current practices,
- show users how they can use and interpret methodologies from economic analyses, cost and management accounting, and financial accounting, and
- persuade readers that they themselves can conduct basic and complex evaluations that include costs of programs, as well as the monetary and other universal outcomes resulting from programs.

As an educational resource, we trust that this book can serve both as a core text for a course in program evaluation and as a supplement for courses wishing to add more instruction in cost-inclusive evaluation than is provided in the cursory sections or chapters of generalist tomes. Given that our book does not require a background in economics, business, or statistics, it can be used in more disciplines than can most books on cost-inclusive evaluation.

Our biggest hope, however, is that this book can be used by working evaluators, program administrators and managers, service providers, and policymakers who want to update their skill sets to include the basics of evaluations that include costs. More and more do! Rather than handing over evaluation of costs, cost-effectiveness, cost-benefit, cost-utility, and so forth, entirely to others, evaluators using this book can become able to do those analyses themselves or in partnership with economists and accountants. We also encourage program administrators to get familiar with the tools and techniques discussed in this book, especially those related to cost and management accounting. These tools and techniques are easy to grasp and can tremendously help to improve the efficiency of program operations and empower organizations to do more with less.

Many organizational staff members and some leaders view evaluation as a task performed by an outside professional. Although external evaluators bring independence and objectivity to the process, internal evaluation often is far more formative—more likely to be helpful rather than purely judgmental. Moreover, if done regularly, internal cost-inclusive evaluation can really enhance decision making. It can help organizations strategically plan for changes in funding and for growth in adverse environments. Cost-based data minimize guesswork and provide justification for sound decisions.

Cost-inclusive evaluation should therefore not be an activity reserved exclusively for the external evaluator. When practical, organizations should engage in their own cost-inclusive evaluations. At a minimum, program managers should be sufficiently familiar with the various techniques discussed in this book that they can request that the external evaluator perform certain types of cost analyses that can aid decision making. Knowledge is empowerment.

We are convinced that by the time you have completed reading this book, you will feel much more confident and empowered with the knowledge gleaned from it. If you are a program administrator, we hope you will be motivated to commission a cost-inclusive evaluation or to do some internal cost analyses yourself and to use cost information in all forms and fashions to enhance decision making. If you are an evaluator, we trust you will be inspired to conduct a cost-inclusive evaluation. Moreover, regardless of your role in evaluation, we hope you will recognize and see how the information presented in this book can help human service programs evolve, secure funding, and flourish. Plus, you will have concrete tools in hand for conducting and using several forms of cost-inclusive evaluation in your program and in your decisions.

In closing, we wish to express gratitude to our families for their support and encouragement as we wrote this book and to Michael Scriven, who inspired us to write it. Michael has indicated on numerous occasions that cost analysis is the missing component of professional evaluation. We hope that our book makes a difference in helping to fill this gap.

We would also like to take this opportunity to thank The Guilford Press, without whom this endeavor would not have been possible. Special thanks and gratitude to C. Deborah Laughton, Publisher, Methodology and Statistics, for her extraordinary patience and dedication to this project. Life has an interesting way of throwing curveballs at the most unexpected times. However, C. Deborah supported us during those difficult times and offered comfort and understanding, accommodating several revised timelines. Gratitude also to Katherine Sommer, Developmental Editor, who prepared the manuscript for production; Paul Gordon, Art Director, who designed the cover for our book; Anna Nelson, Senior Production Editor; and Oliver Sharpe, Typesetter.

Last, but not least, we wish to sincerely thank the anonymous reviewers, whose names were revealed to us during the typesetting process so we could thank them publicly. We express gratitude for your invaluable feedback. The role of reviewers is vital to the peer review publication process. It is a demanding job with short timelines and requires much commitment from the brave souls who unselfishly undertake this process. Our heartfelt thanks to Fred Newman, Public Health and Social Work, Florida International University; Patrick Fowler, Psychology, DePaul University; Linda Schrader, Askew School of Public Administration and Policy, Florida State University; the late George Julnes, Public Affairs, University of Baltimore; the late Chris Coryn, Evaluation Center, Western Michigan University; Kimberly A. Fredericks, Dean, School of Management, Sages College; and Ryan Yeung, Urban Policy and Planning, Hunter College.

Nadini Persaud
Brian T. Yates


PART I

The Why, the Types, and the Tools of Cost‑Inclusive Evaluation

CHAPTER 1

Cost‑Inclusive Evaluation

Today more than ever, consideration of program costs is critically important. Severe scarcity of financial resources, budget cuts, and calls for greater accountability and transparency of public, private, and nonprofit expenditures require that decision makers everywhere be much more cost conscious (Persaud, 2021). Evaluators have an important role to play in ensuring that operational costs, in relation to program outputs,¹ outcomes, and impacts, provide the best value-for-money.

Currently, the use of cost-inclusive evaluation is relatively limited in program evaluation. This limited use has been caused by a variety of issues, including preconceptions about the expense associated with doing cost-inclusive evaluation, worries about the complexity of “cost studies,” concerns about possible discovery of financial mismanagement when evaluating costs, and especially evaluators’ self-assessed competency to conduct such studies (Persaud, 2021). These concerns notwithstanding, more emphasis will likely be placed on cost-inclusive evaluations in this era, as administrators and evaluators come to terms with what for many is a new normal: doing even more with much less.

To do more with less, decision makers need to become familiar with models, terms, and analyses common in financial accounting, cost and management accounting, and common economic appraisal tools and methodologies (Persaud, 2021). Only with these tools will they be able to make and fund decisions that will doubtless come under severe scrutiny. Evaluators also need to be conversant with accounting and economic tools and methods to help decision makers assess the true worth of an evaluand (i.e., what is being evaluated).

¹ We use output as a synonym for process, outcome as the primary result of program operations, and impact as a more distal and often monetary type of outcome, as detailed in the Glossary entries for these terms.

COST DATA: THE LIFEBLOOD OF COST‑INCLUSIVE DECISION MAKING

Organizations today operate in an increasingly complex, tumultuous environment, with oversight by the people being served, as well as by politicians and funders. The days of intuition-based decision making are gone. Today’s and tomorrow’s decision making is highly dependent on objective evidence—on data (Persaud, 2021). Data-based decisions clearly have become the preferred way to minimize guesswork, maximize insights, and provide the most defensible decisions possible (Devine, Srinivasan, & Zaman, 2004). Program managers need accurate, relevant, credible, and timely data to verify and understand their operations and to facilitate the intricate mosaic of decision making (Persaud, in press). Data on program costs and on monetary outcomes of programs are essential. Credible evidence also is critical to policy decisions on priorities and strategies (Organization for Economic Co-operation and Development, 2017). Major organizational decisions (e.g., whether to maintain, grow, or contract operations) rely on cost as well as outcome data as the very lifeblood of decision making. Some organizations are so data driven that decision making can frequently become quite overwhelmed by the sheer volume of information collected.

As the world struggles to rebound from the impact of COVID-19 and other major shocks that have crippled its economy, data on costs as well as outcomes will take on an increasingly important role in decision making. Data are critically important not only to help with the global economic recovery efforts but also to ensure that policies are put in place to assist the most vulnerable and marginalized in society to recover from this shock and to become more resilient in the future. Reliable, valid information—in particular, development data—is also important for another noteworthy global initiative, namely, the United Nations 2030 Sustainable Development Goals Agenda. Endorsed by 193 countries in 2015, the Agenda requires that development data collection and analysis take center stage in policy at both the country and international levels. Progress on the Agenda is only possible with the right types of data.

[Call-out box, “Building Blocks of Decision Making”: “Data are the lifeblood of decision making and the raw material for accountability. Without high-quality data providing the right information on the right things at the right time, designing, monitoring, and evaluating effective policies becomes almost impossible” (Independent Expert Advisory Group, 2014, p. 2).]

As aforementioned, cost and outcome data are critical to all types of decisions. However, their importance as they pertain to budgets and funding must be highlighted. Prior to the COVID-19 global pandemic, funding was already drying up and in scarce supply. Additionally, organizations everywhere were already challenged with budget cuts and trying to do the same or more with less and less. Given the current environment, funding is even more constrained and stretched to its limits. Senior administrators will thus have to rely heavily on cost and outcome information to justify funding for new, as well as existing, operations. In this environment, it can no longer be business as usual. Now, more than ever, serious cost analyses must be performed to justify outputs and to show that the best value-for-money options are being pursued.

It is therefore important that those tasked with decision making and evaluation utilize data that are meaningful and insightful to formulate informed decisions, as operational efficiency is now under the microscope. Self-sustainability is now a constant buzzword that must be seriously pondered to ensure organizational survival. In this environment, survival is no longer guaranteed based on the old idiom of providing an important societal good. Administrators must now demonstrate value-for-money in everyday operations (Persaud, 2020) and viability (McKinney, 2004).

[Figure residue, flow diagram: data provide the mechanism to quantify, verify, validate, and understand; help decision makers remove guesswork and subjectivity and formulate informed, rational decisions; and lead to greater operational efficiencies and proactivity.]

WHY CONSIDER COSTS IN EVALUATIONS?

The word cost is common in everyday parlance. We hear and use this word all the time. Whether it is the domestic consumer complaining about the rising cost of living, or some service agency drawing attention to the increasing costs associated with providing a particular service (which ultimately will affect us as consumers of that service, either through higher service fees or perhaps through lower quality of service), "cost" affects each one of us in a very substantial way.

Cost has several remarkably different meanings. Common global definitions include: the "amount or equivalent charged for something; the outlay or expenditure (as of effort or sacrifice) made to achieve an object; monetary value of goods and services that producers and consumers purchase; measure of the alternative opportunities forgone in the choice of one good or activity over others" (Encyclopedia Britannica Ready Reference, 2003). In management science and business, cost represents all monetary expenses incurred by a firm in the conduct of business and is related to one primary outcome or motive: profit. In contrast, the discipline of economics often discusses cost using the definition of opportunity costs,2 that is, "actual resource use in the economy" that "reflect the best alternative uses that the resource could be put to" (New Zealand Treasury, 2005, p. 14), and "the monetary value of all the resources associated with any particular action" whose "value is determined by their worth in the most productive alternative applications" (Levin, 1983, p. 354).

Another definition used is willingness to pay, which is also subjective and quite context-specific. For example, a consumer's willingness to pay for basic food items necessary for survival in a poor nation will be considerably different and much lower compared with the value placed on the same food items in a wealthier nation. Further, when there is political unrest in a country, basic food items may carry prohibitive prices because of price-gouging tactics, as evidenced in Venezuela for many years. For example, in 2020, a basic basket of food items in Venezuela exceeded the monthly minimum wage by a ratio of almost 3.5:1 (Latin American Herald Tribune, 2020). It should also be noted that those most in need of many critical human services may not be able to pay anything and may not be able to make coherent choices that underlie the willingness-to-pay valuation approach.

For social programs, costs should generally be considered as the value of all resources required and consumed to achieve some outcome, regardless of whether they are purchased, donated, or borrowed. All programs utilize and consume certain resources (e.g., personnel, facilities, utilities, equipment, supplies) daily. However, although it is easy to assign a monetary value to many types of consumed resources, assigning a cost to other types can be quite difficult.
For instance, the cost associated with human resources can usually be measured in a relatively straightforward manner using the current market rate (e.g., hourly/daily/weekly/monthly) for the particular type of personnel. In contrast, assigning a monetary value to a life saved is often difficult and controversial. A challenging issue when thinking about costs pertains to the question of costs to whom. For instance, in an evaluation of a social program, should the organization's personnel time spent with a participant3 be counted alone, or should the analysis also consider the participant's time? To complicate matters, suppose that a family member needed to accompany the participant for the procedure. Should this person's time be counted, given that this individual may have incurred lost wages? How should each of these human resources be measured? Should different valuation methods be used? Would it be appropriate to use a monetary measure for the organization's personnel and another measure—perhaps opportunity cost—for participant time? The determination of the costs to whom dilemma is largely dependent on the interest group perspective adopted, that is, whether a program is being evaluated from an organization's perspective or from a societal perspective, namely, a social cost-benefit analysis.

2 Leading evaluation expert Michael Scriven contends that a definition for opportunity costs requires a considerable amount of expertise in order to identify the most valuable alternative, as it may be something that has never before been conceptualized.

3 In this book, participant(s) is used to refer to the person(s) receiving or involved with the program services, whereas client is used to refer to the organization commissioning the evaluation.

By now, it should be obvious that costs are an important consideration that needs to be included in any serious evaluation. A cost-inclusive evaluation approach (in comparison with the traditional evaluation approach) is undoubtedly superior and insightful, as it can answer specific questions aimed at understanding different aspects of a program's cost. Understanding your program's costs is fundamental to proactive strategic planning. Trying to keep costs as low as possible, while delivering at a high standard, is an essential requirement in today's competitive and cash-constrained environments. In fact, if this issue is not accorded top priority, your program, and your enterprise, may fold. An evaluation that considers the costs of producing a good or delivering a service can enhance and improve your understanding of your operations in several ways and can help you to find effective solutions to problems. It can reveal unexpected costs (e.g., use of obsolete equipment, which requires more time or effort, coupled with higher maintenance costs), identify wastage (e.g., high electricity costs that can be controlled if air conditioning units and lights are turned off when not in use), detect pilferage (e.g., caused by inadequate controls over access to inventory and storage that is not secured), and so on.
It can also lead to more efficient use of resources and can help you to do more with the same amount of time and effort. It can even tell you which intervention levels are most cost-effective.

Reporting guidelines and regulations are another compelling reason for including costs in program evaluations. Most funding agencies now require that service providers keep proper records that clearly show the major types of resource expenditure for a program, as well as costs incurred for specific program components. Many agencies are also now requiring that cost analyses be conducted to rationalize reimbursements for services provided. Some agencies even set a ceiling on costs, not just for the program but for each of its major "ingredients." Additionally, funders such as the Wisconsin Department of Children and Families (2012) are now requesting that allowable4 expenses for reimbursements be separated from unallowable5,6 expenses and that separate accounts be established to record unallowable costs. In light of the many new compliance regulations from funding agencies, costs are becoming an increasingly important issue in evaluations.

4 In the United States, allowable costs in nonprofits are costs that are reimbursable (Office of Management and Budget, n.d.).

5 In the United States, unallowable costs in nonprofits are business expenses incurred that are not eligible for reimbursement (Office of Management and Budget, n.d.).

6 Note that for financial reporting purposes in the United States, allowable costs need to be separated from unallowable costs. However, in the context of cost-inclusive evaluation, these costs (such as opportunity costs) may be very relevant and should be included in the analysis.

In addition to the aforementioned, cost-inclusive evaluation is a great way to promote transparency and fiscal accountability in a program. Including costs in evaluations and considering monetary outcomes can be a learning opportunity for program administrators, as well as participants and participant advocates, which can facilitate improvement. Evaluations that consider program costs are more likely to be used for funding decisions compared with those that omit program costs. As a decision maker, you will be in a much better position to think about how things can be done differently if your evaluation analyzes the cost ingredients and resources consumed that produced your outputs. Cost-inclusive evaluation can also help you to set priorities when resources are limited, and it is a powerful way to help convince policy makers, funders, donors, and other interested stakeholder groups to invest in and support your program.

Doing cost-inclusive evaluation is therefore extremely important. However, many program administrators and managers have inherent fears that the sharing of cost data may put their programs at risk. Specifically, they often fear that their programs may not measure up and may be shut down because an evaluation may reveal that their program is not "worth it." Consequently, they are often reluctant to provide cost data. Although such fears may be warranted, decision makers need to start to think of cost-inclusive evaluation differently. In today's environment, budget and resource constraints are a reality, various stakeholder groups are constantly lobbying for transparency, and efficiency and effectiveness are new buzzwords. It is now generally accepted that all programs need to be accountable for resources consumed and the outputs produced. Moreover, program improvement can only take place when there is comprehensive cost information on all aspects of a program. The consideration of program costs is thus now a critical piece of the puzzle that can make the picture complete.

Cost-inclusive evaluation should therefore become our new buzzword in the 21st century. Instead of harboring fears that our programs may close, let us, as decision makers, be proactive and ask questions such as the following:

• How can cost-inclusive evaluation help us to learn and improve so that our programs can continue?
• How can cost-inclusive evaluation assist us in making more informed decisions that promote economy, efficiency, and value-for-money?
• How can cost-inclusive evaluation help us to use our scarce resources to maximize public good?

In this book, costs are discussed using multiple definitions, as more than one definition may help to portray a program's worth more accurately. Our overarching goal is to offer readers the opportunity to seriously consider program inputs, describe them, and evaluate them, so that they can clearly understand the process that leads to outputs and, by extension, understand how cost data can enrich decision making. This book also helps readers to understand why it is important for both administrators and evaluators to become familiar with financial accounting, cost and management accounting, and economic appraisal tools and methodologies. Additionally, it highlights why traditional cost-analysis methodologies that report only a single number may not always accurately portray a program's true worth or value and why it is important to maintain comprehensive information on all the ingredients consumed—whether purchased, donated, or borrowed. Our book uses a range of examples to explain different concepts, as it is intended for a diverse audience. We provide insight on different methods of costing and different ways in which cost information can be analyzed. We explain how it is possible for all programs to collect some cost data, even when faced with budget constraints. We make every effort to ensure that the information presented is simple so that our readers are not overwhelmed. After reading our book, we hope that both decision makers and evaluators will feel more comfortable, motivated, and inspired to adopt and conduct cost-inclusive evaluations.
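Before moving on, the costs to whom question raised earlier can be made concrete with a small sketch. Everything below is hypothetical (the function, the staffing scenario, and all hours and dollar rates), and valuing participant and companion time at a wage rate is only one of several defensible approaches:

```python
# Hypothetical sketch: how the analytic perspective changes a program's total cost.
# All names, hours, and rates are invented for illustration.

def program_cost(personnel_hours, personnel_rate,
                 participant_hours=0.0, participant_rate=0.0,
                 companion_hours=0.0, companion_rate=0.0,
                 perspective="organization"):
    """Total cost of a service episode under a chosen analytic perspective.

    The organizational perspective counts only the provider's own resources;
    the societal perspective also values participant and companion time,
    proxied here by hourly wage rates (a common but contestable choice).
    """
    organizational = personnel_hours * personnel_rate
    if perspective == "organization":
        return organizational
    if perspective == "societal":
        return (organizational
                + participant_hours * participant_rate
                + companion_hours * companion_rate)
    raise ValueError(f"unknown perspective: {perspective!r}")

# A two-hour session delivered by a $50/hour staff member, attended by a
# participant and an accompanying family member who each forgo $20/hour in wages.
print(program_cost(2, 50, 2, 20, 2, 20, perspective="organization"))  # 100
print(program_cost(2, 50, 2, 20, 2, 20, perspective="societal"))      # 180
```

The point of the sketch is that the same episode legitimately carries different costs depending on whose resources are counted, which is exactly the costs to whom dilemma discussed earlier.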

IMPLICIT INCLUSION OF COSTS IN ALL EVALUATIONS

All programs incur certain costs to achieve a given magnitude of desired change. Yet "the number of cost-inclusive evaluations remains comparatively small" (Herman, Avery, Schemp, & Walsh, 2009, p. 55), and many evaluations fall short by not providing any account of program resources consumed (Yates & Marra, 2017). Serious professional evaluation should ideally examine program costs in relation to the specific activities of the program—and the processes those activities are designed to instill, modify, or eliminate in participants—to achieve program outputs.


Why do so many program evaluations omit any mention, or at least measurement, of the resources consumed by a program, that is, its costs? Some evaluators would explain that the evaluation cannot afford to include costs or their measurement. Some program managers would agree. Excluding costs from an evaluation essentially says that program costs are unimportant or, if alternative programs are being compared, that costs for all programs are similar enough to be disregarded. These reasons for excluding costs from evaluation are likely to be untrue. More likely, the evaluators lack confidence in measuring and evaluating costs. It also is possible that program managers are concerned that if costs are included in an evaluation, the findings may somehow be damaging. Arriving at an evaluative judgment that Program A is more effective compared with Program B or Program C without also considering program costs is not only unsound but highly questionable. In general, programs that are judged to be more effective or better achieve this at greater expense (i.e., they consume more or use higher quality resources). This is not to suggest that low-cost alternatives are worthless and should be completely ignored. Rather, these options should always be considered, as they may be equally effective (Scriven, 2015). For instance, the Nutraloaf, which is used in many prisons in the United States, meets the nutritional standards for basic health and is cheap in comparison with other nutritional options. However, many civil rights activists argue that it is inhumane to serve this product because it is quite unpalatable. Ignoring a program's budget in an evaluation is not defensible, because key determinants of whether a program worked, and for how many people, are ignored. Consider public schools in the United States.
Public schools in wealthier areas such as Brookline, Massachusetts, receive considerably more funding (more donations) compared with those in less wealthy school districts such as nearby Roxbury, Boston. Suppose a national evaluation of public schools concluded (without examining costs) that Brookline, Massachusetts, public schools are much more effective compared with public schools in other districts. Policymakers and other interested stakeholder groups may then conclude that this is because Brookline public schools used better teaching methods. Although Brookline public schools may have indeed used a new technique, this may have been possible only because they had a much larger budget to successfully execute teaching techniques, or perhaps because class sizes were considerably smaller. Questions should therefore be raised about the accuracy of the evaluation statement, as well as the methodology used for the evaluation, because an important element (costs) was not evaluated. Including program costs in an evaluation is therefore not only a good idea but also essential. Most programs keep detailed records on outputs.
However, less detailed records are maintained on the resources consumed to produce those outputs. Moreover, usually little or no attempt is made to critically analyze the link between inputs and outputs, which is important for a proper understanding of how a program works. Outputs can never be accomplished without inputs. Even if all inputs are donated or free, certain inputs must be consumed to produce outputs. The aforementioned discussion highlights the importance of taking program costs into account in evaluation. However, program costs can only be properly analyzed if the requisite cost data, broken down by ingredients consumed, are maintained by the organization executing the program. Evaluators therefore need to discuss, with the organization or program administrator (hereinafter referred to as the client) commissioning an evaluation, the type of cost study that is realistically doable, considering the cost data that are available, as well as the evaluation budget.

DANGERS OF NOT MAKING COSTS EXPLICIT IN PROGRAM EVALUATIONS

A key question to consider in any serious evaluation is whether a program is worth its costs. The question is best answered by an appraisal technique referred to as cost-benefit analysis, which essentially compares the monetary costs of a program with the monetary benefits of the same program. This technique is related to several other cost-analytical techniques that similarly assess costs in relation to benefits, as discussed in Chapter 4. Irrespective of how elegant or sophisticated an evaluation is in concluding that a program is of high quality, and regardless of how good it is at establishing outcomes and the causes of those outcomes, the important question at the end of the day revolves around one pertinent issue: cost (Persaud, 2021). Moreover, even when an evaluation concludes that a program is fulfilling the objectives of its raison d'être, this is not an indication that the program is worth the investment being made in it, or even that it is cost-feasible. Increasingly, cost information is relied on to prevent bad funding decisions and to answer pertinent questions such as those shown in Table 1.1.

TABLE 1.1. Types of Decisions Facilitated with Cost Information

Funding Decisions
• Is this the best use for our financial resources?
• Has the funding been used as it was intended?
• Can we continue to provide this service if our funding is cut?

Investment Decisions
• What is the return on investment on this project?
• How long will it take to recover the initial investment on this project?
• What is the project's net present value?

Learning and Improvement
• Are improvements needed? If so, what will they cost?
• Which program variations are working and at what cost?
• Are our program costs excessively high, reasonable, or cheap?

Making Choices
• What are the cost implications of shifting from an inpatient to an outpatient setting?
• Which of these programs should we invest in?
• Given our constrained financial resources, which programs should we continue to support?

Program Continuation, Expansion, and Replication
• Would it be prudent to add or drop a particular component of this program?
• What are the cost implications of scaling up or down?
• What are the cost implications of replicating a program in a different environment?

Value-for-Money Decisions
• Is this program delivering value-for-money?
• Which of these options will maximize social welfare?
• How do actual program costs compare with similar benchmarks?

Regardless of the magnitude of the decision, the consideration of program costs is generally critical to the decision-making process. Analyzing program costs should therefore be a requisite task of any sound and serious program evaluation—not an add-on component—as its exclusion can seriously undermine decision making. Not making program costs explicit in program evaluations can pose a real danger to the usefulness of an evaluation and can result in faulty, suboptimal, and even disastrous decisions (Persaud, 2021). For example, a proposed program expansion that is not supported with a proper cost study may result in financial expenditures that are not worthwhile. Similarly, wasteful practices may be allowed to continue indefinitely because no audit is done of individual expenditure categories to determine whether they are reasonable. Given the importance of program costs in relation to informed decision making, Linfield and Posavac (2019) posit that "programs have not been adequately evaluated if costs have not been considered" (p. 238).

Quite apart from the aforementioned, there are other worthwhile benefits of cost-inclusive evaluation. For instance, it can be used to help raise funds for a new program or to secure continued funding for an existing program. Today, the fate of many social intervention efforts is being threatened by severe resource constraints. This, in turn, is influencing the contributions that public and nonprofit organizations can make to the quality of life of the constituents they serve. Organizations now need solid facts to
convince funders that their programs are worthy and that the funds will be used in a responsible way. Evaluations supported by cost information can also convey powerful information to funders and donors when program termination is being considered. A case in point was the Job Corps program in 1980, which was allowed to continue because it was shown that the program's societal benefits greatly outweighed its costs (Mathematica Policy Research, 1982). Currently, budgetary control is a top priority everywhere. Scarcity of funds, budget cuts, public expectations for higher quality goods and services at lower prices, public outcries for greater accountability and transparency, and aggressive media scrutiny, coupled with instant publicity via social media, clearly necessitate that program administrators seriously ponder value-for-money, as inept decision making can quickly turn into a political scandal (Persaud, 2021). Today, service providers need to "demonstrate at least minimum levels of effectiveness for no more than a maximum allowable cost" (Yates, 1999, p. 1). The consideration of program costs is also potentially valuable for assessing and making comparisons among programs that provide comparable services. Given that government financial resources generally are to be invested to maximize the public good, it is important that program costs and effects be analyzed in comparison with critical competitors, that is, viable alternatives that could provide comparable benefits for slightly higher or lower costs (Scriven, 2015). Comparative analyses also can help identify programs that may appear on the surface to be cost-effective but appear so primarily because program outputs were deliberately set so low that they easily could be achieved. In such cases, programs either need to be redesigned or discontinued so that valued resources are not wasted.
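Several of the questions in Table 1.1 (return on investment, payback period, net present value) reduce to simple arithmetic once yearly costs and monetized benefits are laid out. The cash flows and the 5% discount rate below are invented purely for illustration; this is a sketch of the mechanics, not a recommended parameterization:

```python
# Hypothetical cost-benefit arithmetic: net present value (NPV) and
# benefit-cost ratio (BCR). All cash flows and the discount rate are invented.

def present_value(yearly_amounts, rate):
    """Discount a list of yearly amounts back to year 0 (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_amounts))

costs_by_year = [100_000, 20_000, 20_000]    # year 0 startup, then operations
benefits_by_year = [0, 80_000, 90_000]       # monetized outcomes lag the spending
rate = 0.05                                  # illustrative discount rate

pv_costs = present_value(costs_by_year, rate)
pv_benefits = present_value(benefits_by_year, rate)

npv = pv_benefits - pv_costs   # positive NPV favors the program
bcr = pv_benefits / pv_costs   # BCR > 1 means discounted benefits exceed costs

print(round(npv, 2), round(bcr, 2))  # 20634.92 1.15
```

The hard part in practice is not this arithmetic but monetizing the benefits in the first place, which is exactly why the valuation questions discussed in this chapter matter.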
Detailed and precise cost information showing anticipated benefits in relation to anticipated costs (i.e., cost-benefit analysis) also assists in reducing noise and pressure from critics, or even in disarming them completely. Carefully compiled and artfully presented cost data can make it quite difficult for critics to raise questions about resources being wasted, because expenditures can be more directly and quantitatively associated with positive outcomes. Additionally, accountability will be strengthened by transparency about costs and their relationship to achievements. Thus far, we have emphasized the importance of considering monetary program costs. However, consideration of nonmonetary program costs, such as in-kind, opportunity, and psychological costs, is equally important in many types of programs (Scriven, 2015) and requires careful analysis. For example, volunteered time is a common cost that is often taken for granted. Many service programs use volunteered time to run operations on particular days or entirely. Ignoring the cost of volunteered time does not provide an accurate reflection of a program's true costs. If doctors and
nurses are volunteering their time to help run local clinics, valuing their hours at local pay scales would show what it would cost to sustain the program if volunteered services were no longer forthcoming. Similarly, a food program for the homeless that is being run primarily by retirees may need to consider the opportunity costs associated with the volunteers' time, because the volunteers may have an alternative use for their time (e.g., spending time with their families, going to the beach). The opportunity cost associated with this resource may therefore warrant at least a qualitative discussion in the evaluation report. The ability of a program to mobilize volunteers who prefer to spend their time in program activities rather than in leisure pursuits can be viewed as a particularly desirable attribute of the program.
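The replacement-cost reasoning above can be sketched in a few lines. The staffing mix and the local hourly rates are hypothetical:

```python
# Hypothetical replacement-cost valuation of volunteered time: what the program
# would have to pay at local market rates if volunteers stopped coming.

volunteer_hours = {"physician": 120, "nurse": 300, "driver": 80}
local_hourly_rate = {"physician": 110.0, "nurse": 45.0, "driver": 18.0}

replacement_cost = sum(hours * local_hourly_rate[role]
                       for role, hours in volunteer_hours.items())
print(replacement_cost)  # 28140.0
```

An opportunity-cost valuation of the same hours (what the volunteers themselves give up) could use entirely different rates, which is why the text recommends at least a qualitative discussion when wage proxies are doubtful.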

CURRENT USE OF COST-INCLUSIVE EVALUATIONS BY FUNDING DECISION MAKERS

The last two decades have seen increased demand for all types of social services, particularly human and health services. Simultaneously, the cost of providing these services is becoming progressively higher, while the availability of funding declines, sometimes precipitously. The convergence of these problems causes funders to become increasingly concerned with ensuring that limited funds are spent wisely. Opportunity costs are often monetary: Every dollar expended on a particular program means one less dollar to fund a different program. In many highly developed countries, various stakeholder groups (e.g., parliament, state legislatures, foundations, funding agencies) are increasingly demanding information pertaining to both the disbursement of program funds and the benefits that were derived from utilization of those funds. This requirement has been fueled in part by increased media scrutiny, public outcries for greater accountability of public expenditure, and demands for higher quality services. As economies falter and resources dwindle, funders everywhere are becoming more cognizant of the increased need for accountability to avoid public criticism. In light of our new accountability era, decision makers are under constant pressure to act and react (Persaud, 2021). Daily, policymakers face tough questions about the financial costs of government action. Consequently, it is now becoming quite routine for governments to include, as part of their mission, audits and evaluations to examine the cost-effectiveness of government-funded initiatives. For example, the U.S. Government Accountability Office (GAO) Congressional Protocols (GAO-04-310G) state that the GAO "examines the use of public funds, evaluates federal programs and activities, and provides analyses, options, recommendations, and other
assistance to help the Congress make effective oversight, policy, and funding decisions" (U.S. Government Accountability Office, 2004, p. 7). Similarly, the Office of the Auditor General of Canada promotes accountability by conducting independent audits of its many departments, agencies, federal organizations, Crown organizations, and programs so as to hold the government accountable for the results it achieves with taxpayers' dollars. In addition to financial audits, the Office of the Auditor General of Canada also conducts performance audits of specific programs to determine whether they are being run in an economical and efficient way (Defoy, 2011).

The increasing emphasis on accountability has resulted in many governments and funders creating detailed guidelines on cost analysis (see Table 1.2 and Appendix 1.1). Many of the guidelines are quite detailed and discuss different types of cost analyses. At this point, it is important to point out that the development and legislation of cost-analysis guidelines that has occurred to date has taken place in developed, rather than developing, countries. Nevertheless, accountability and scarcity of funds are global phenomena. In fact, the issue of accountability is perhaps of greater concern in developing countries because these countries receive substantially more international donor assistance. The technological age has also amplified the number of media reports of alleged misuse of funds and projects with political agendas. Consequently, funders are getting stricter, and many are now requiring that potential program recipients justify the likely economic and financial return on projects. In summary, although the use of cost-inclusive evaluation is on the increase, it is occurring at a relatively slow pace in developed countries and is still practically nonexistent in developing countries.

TABLE 1.2. Free Electronic Resources for Cost Analysis

Government/Organization | Resource | Pages
Australia | Handbook of Cost-Benefit Analysis (Commonwealth of Australia, 2006) | 180
Canada | Canadian Cost-Benefit Analysis Guide: Regulatory Proposals (Treasury Board of Canada Secretariat, 2007) | 51
European Commission | Guide to Cost-Benefit Analysis of Investment Projects: Economic Appraisal Tool for Cohesion Policy 2014–2020 (European Commission, 2014) | 364
National Institute on Drug Abuse | Measuring and Improving Cost, Cost-Effectiveness, and Cost-Benefit for Substance Abuse Treatment Programs (Yates, 1999) | 136
New Zealand | Guide to Social Cost Benefit Analysis (New Zealand Treasury, 2015) | 78
United Kingdom | The Green Book: Central Government's Guidance on Appraisal and Evaluation (H.M. Treasury, 2018) | 124
United Nations | Standards for Evaluation in the UN System (United Nations Evaluation Group, 2005) | 23
United States | GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs (U.S. Government Accountability Office, 2009) | 440
United States | Cost Analysis in Program Evaluation: A Guide for Child Welfare Researchers and Service Providers (U.S. Department of Health and Human Services, Children's Bureau, 2014) | 22

Note. Links to these resources are provided in Appendix 1.1.

WHY MANY PROGRAM EVALUATORS ARE IDEALLY POSITIONED TO INCLUDE COSTS IN EVALUATIONS

The past several decades have witnessed some progress in the promotion and use of cost studies, as evidenced by the guidelines provided in Table 1.2, among others. Evaluators, therefore, have a wide array of resources at their disposal. Notwithstanding this fact, the number of cost-inclusive evaluations remains relatively limited (Christie & Fleischer, 2010), and many evaluators continue to focus exclusively on program effectiveness without attempting to measure and analyze program costs, which are instrumental to a program's success. This may perhaps be because many evaluators are somewhat intimidated by cost analysis. If you are one of these persons, take comfort in knowing that cost analysis is simply another quantitative and qualitative methodology (Scriven, 2015). Note, also, that many types of statistical analyses are used in cost analysis. This means that you may already have some of the skill sets needed for cost analysis. Moreover, in the same way that some outcomes are best discussed in a qualitative manner, it may also be appropriate to discuss some program costs (resources consumed) using qualitative description. Note, also, that all programs have a budget and should maintain records of all expenditures. However, although expenditure information may be easy to access in some cases, it can be exceedingly difficult in other cases. Experience shows that many program administrators are quite resistant to disclosing actual expenditures to evaluators. It can even be challenging to get access to a program's budgets. It should also be noted that the
cost information in budgets and actual expenditures can vary considerably. Notwithstanding these challenges, evaluators should reason with clients to obtain cost data. Admittedly, data collection comes at a price. However, if you strategically plan a cost-inclusive evaluation from the outset, you can collect cost and outcomes data simultaneously and control the costs of data collection. Always use an electronic spreadsheet to capture your data. Invest time and effort to create a simple template to record cost information. You can then use this template for future program evaluations, as is or with slight modifications. More than likely, many evaluators will already be familiar with spreadsheet software (another skill set needed in cost analysis). If you are not, a YouTube video can provide you with the basics in a few minutes. All evaluators should at least be able to conduct some basic cost analyses, as these do not require great mathematical or accounting skills. For example, it is quite simple to calculate a basic measure of cost-effectiveness, such as cost per alcohol-free day in an alcohol prevention program. Only two pieces of information are required: the total cost of the program (e.g., $15,000) and the number of alcohol-free days for all patients served by the program (e.g., 150 days). The cost per alcohol-free day then becomes $15,000/150 days = $100. If the total number of alcohol-free days for the program is not known but the average number of alcohol-free days per participant has been measured, then that average can be multiplied by the number of participants in the program to arrive at an estimate of the total. As with other types of data collection, the quality of the cost data obtained dictates the types of cost analyses that can be performed. Keep in mind that if you are new to cost analysis, it may be wise to keep your analyses simple.
In time, you will be able to progress to more complicated types of cost analyses.
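For readers who like to check such arithmetic in code rather than a spreadsheet cell, a minimal sketch follows. It uses the hypothetical figures from the example above ($15,000 program cost, 150 alcohol-free days); the function name is our own, not a standard one.

```python
def cost_per_outcome(total_cost, total_outcome_units):
    """Basic cost-effectiveness measure: cost per unit of outcome."""
    if total_outcome_units <= 0:
        raise ValueError("Outcome units must be positive.")
    return total_cost / total_outcome_units

# Hypothetical figures from the example in the text.
program_cost = 15_000      # total program cost, in dollars
alcohol_free_days = 150    # total across all participants

print(cost_per_outcome(program_cost, alcohol_free_days))  # 100.0

# If only the average per participant is known, estimate the total first.
avg_days_per_participant = 7.5   # hypothetical
participants = 20                # hypothetical
estimated_total_days = avg_days_per_participant * participants
print(cost_per_outcome(program_cost, estimated_total_days))  # 100.0
```

The same division works for any outcome unit (participants served, sessions delivered), which is why a reusable template, whether spreadsheet or script, pays off across evaluations.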

ROLES FOR QUANTITATIVE, QUALITATIVE, AND MIXED METHODS

Many evaluators and evaluations call for a mixture of quantitative and qualitative evaluation. The distinction between these two forms of evaluation is more complex than saying that quantitative evaluation uses numbers and qualitative evaluation uses words. What is clear is that mixed methods evaluation (e.g., Mertens, 2018) is often called for in solicitations for grant and contract proposals. As noted, and as illustrated by Rogers, Stevens, and Boymal (2009), cost-inclusive evaluations can and should include subjective judgments as well as objective, numeric measures of resources consumed,
activities enacted, processes inspired, and outcomes achieved. More valid and more useful cost-inclusive evaluations become possible by using objective measures to validate and, when necessary, correct subjective judgments; by using more objective judgments and insights to avoid measuring precisely what is least important; and by involving a variety of stakeholders to ensure that important elements are included in both qualitative and quantitative models of the service program. For example, Rogers et al. (2009) conducted a qualitative evaluation that compared costs with benefits for a Stronger Families and Communities program in Australia. This evaluation enhanced its validity by asking a variety of participants to describe the resources expended, positive outcomes attained, and negative outcomes avoided. They recognized the limitations of this methodology, though, given the absence of a clear indicator of the causal nature and size of the cost-benefit relationship. Adding quantitative estimates gleaned from quick, subjective judgments of funder, provider, and participant stakeholders could augment this sort of cost-inclusive evaluation. Validating it with careful (but costly) sophisticated quantitative research would further the potential usefulness of this evaluation for a variety of decision makers. Emphasizing the qualitative components of cost-inclusive evaluations also promises to help control the costs of cost-inclusive evaluations, which otherwise can be considerable increments over those of outcome- and process-focused evaluations. Peterson (2020) demonstrated this in a recent adaptation of the value-for-money method developed by King (2019a, 2019b), in which a cost-inclusive evaluation of 2.2 billion pounds' worth of numerous programs funded by the Global Challenges Research Fund was conducted by groups of experts assembled by the U.K. Department for Business, Energy, and Industrial Strategy.
A rubric developed collaboratively in peer review produced meaningful, improvement-oriented findings.

RESISTANCE TO INCLUSION OF COSTS IN EVALUATION

As previously noted, the consideration of program costs should be a critical component of sound decision making. Surprisingly, however, there remains considerable resistance to cost-inclusive evaluations on the part of clients, as well as some evaluators. This may be due to several issues that, fortunately, are quite solvable (see Table 1.3). Evaluations of many sorts often encounter resistance, especially when some participants feel threatened by potential findings. When questions of cost and worth are added to the more common questions of whether programs are implemented with reasonable fidelity and of whether predicted outcomes are being achieved, resistance can escalate dramatically. Yet asking questions about costs and return on investment seems both natural and justified to many stakeholders, from clients who devote their time to program activities to taxpayers and donors who contribute financial resources.

TABLE 1.3.  Possible Reasons for Resistance to Consideration of Program Costs by Clients and Evaluators and Potential Solutions

Insufficient Promotion of Cost-Inclusive Evaluation in the Literature
• Discuss cost-inclusive evaluation in more evaluation textbooks and leading evaluation journals to promote its importance and garner buy-in for its use.
• Promote cost-inclusive evaluation at evaluation conferences.

Evaluators' Concerns about Competency to Conduct Cost Studies
• Make cost analysis a requisite skill set for degree programs in evaluation (see Appendix 1.2 for institutions offering training in cost analysis).
• Evaluators can engage in self-learning.

Perceptions That Cost Studies Are Costly, Difficult, Time-Consuming
• Work with the client from the outset to reduce fears that cost-inclusive evaluation is impossible.
• Focus on simple, rather than complicated, types of analyses.
• Negotiate with the client for an increased budget and timeline.
• Present different cost-analysis options to the client, along with the associated budget for each type of analysis.

Perceptions That Costs Are Trivial
• Show your client how cost analysis can aid and enhance decision making.
• Explain how cost analyses can lead to operational efficiencies.
• Help your client understand how cost analysis can promote accountability and transparency and reduce criticisms.

Fears That Funding May Be Reduced or Terminated If Cost Studies Are Included
• Show your client how cost savings can serve more participants and thus justify program continuation.
• Help your client understand that having no cost data may be a disadvantage, rather than an advantage.

Fears That Cost Studies Will Showcase Management in a Negative Light
• Help your client understand that the purpose of a cost study is not to lay blame. Rather, a cost-inclusive evaluation can significantly aid learning and improvement.
• Reiterate that funding is drying up, funders are getting more selective, and a cost study can help to ensure the survival of your program.

Concerns with Measurement and Valuation Problems
• If monetary quantification is too difficult, time-consuming, or expensive, use a methodology that does not require monetary quantification of benefits.
• Use a mixed methods design that includes quantitative and qualitative analyses.
• Use sensitivity analyses when estimates are uncertain.
• Help your client set up a system that can collect proper cost data.
• Document all assumptions used for the cost study.

STRATEGIES THAT CAN BE USED TO CONDUCT A COST-INCLUSIVE EVALUATION

Globally, organizations everywhere are competing for limited funds, and budget cuts are now a common reality (Persaud, 2021). As such, program evaluations will likely be accorded low priority on many organizational agendas. Yet the current environment may actually provide an ideal platform to engage potential clients and discuss why a cost-inclusive evaluation is relevant and so important, especially now. In these trying times, administrators need to ensure that they are capitalizing on operational efficiencies, because basic survival is at stake. Organizations must demonstrate value-for-money, and cost-inclusive evaluations can help to determine how this can be achieved. Evaluators, for their part, also need a livelihood. They cannot work gratis, as they, too, have bills. So they must work with clients, even those with a shoestring budget. Evaluators also need to recognize that they operate in an extremely cost-competitive environment, that their clients will always have limited financial resources, and that someone else will always be available to do the job. Evaluators and clients, therefore, need to engage in conversations about how a cost-inclusive evaluation can become a reality. Finding creative strategies that permit collection of valid, timely, and relatively inexpensive data, without compromising standards and rigor, is quite important in today's environment. Several creative options can be considered when confronted with budget challenges that can help in making a cost-inclusive evaluation a reality (see Table 1.4). Note, however, that if absolute independence is important, some strategies may not be appropriate, as they can compromise the integrity of the evaluation process. In other instances, it may be possible to use a combination of strategies.
However, the pros and cons of any strategy should be properly analyzed and discussed with your client prior to adoption, so that your client is aware of the scope of the evaluation and exactly what can be done with the available budget.

TABLE 1.4.  Possible Strategies That Can Be Used to Conduct a Cost-Inclusive Evaluation

Collaborate and Negotiate with Your Client When Setting the Evaluation Budget. Propose different options and explain the benefit and cost of each option.

Help Your Client to Understand Why Cost-Inclusive Evaluation Should Be an Integral Part of a Program Budget. This will help to ensure that adequate resources are set aside for conducting cost-inclusive evaluations in the future.

Reduce Evaluation Scope and Simplify Evaluation Design. Collect only essential data. Note that simple cost analyses are better than no cost analyses.

Use a Less Expensive Cost-Analytical Methodology. Certain types of cost analyses can be very technical and expensive. Substitute a less expensive methodology, such as cost-effectiveness analysis.

Use Less Precision and Rigor. "It is better to be roughly right than precisely ignorant" (Newcomer, Hatry, & Wholey, 1994, p. 1).

Use Qualitative Analyses to Supplement Quantitative Analyses. Some benefits are difficult to quantify in monetary terms. Supplement with a qualitative discussion.

Use Alternative or Cheaper Labor Sources for Data Collection. See if program staff can collect some data, or use graduate students and provide mentorship and a small stipend (Persaud & Rudy, 2008).

Use Other Creative Strategies to Reduce Costs. See whether your client can provide office space, printing, and teleconferencing and telephone facilities. Use teleconferencing for interviews to reduce travel costs. Reduce sample size and survey length to reduce data collection and analysis costs. See whether your client can provide local transportation for any data collection that is required.

SUMMARY

Dwindling financial resources, calls for greater accountability and transparency, and the desire to do as much social good as possible with limited financial resources necessitate that both decision makers and program evaluators embrace cost-inclusive evaluation and the accounting tools and techniques that can help to enhance and improve organizational strategic decision making. This is, however, easier said than done, as many evaluators do not possess the requisite skills and competencies needed to conduct cost-inclusive evaluation. Those that do often face resistance from programs and
organizations that are quite hesitant to share cost data. This chapter seeks to dispel the fears associated with cost-inclusive evaluation and sharing of cost information. It explains why cost data are the lifeblood of decision making, why costs should and can easily be considered in evaluations, the dangers of not making costs explicit in program evaluations, and more. As the world grapples with economic recovery from the devastation caused by the COVID-19 pandemic and international disorder, optimizing financial resources and spending wisely must be priorities in all organizations. With dwindling financial resources everywhere, budget cuts in practically every organization, greater calls for accountability and transparency, and funders implementing more rigorous criteria to evaluate funding requests, organizations and evaluators need to capitalize on the tools and techniques that can help organizations to survive and prosper in an increasingly turbulent world. Our book outlines, in the remaining eight chapters, traditional and new cost-analytical tools that can be adopted to perform internal and external cost-inclusive evaluations. The content is designed to help readers understand how to monetize costs and outcomes and speaks to the many issues that need to be considered when conducting cost-inclusive evaluations. Our book is structured as follows.

• Chapter 2 discusses the different types of costs and outcomes that may need to be considered in cost-inclusive evaluations.
• Chapter 3 shares several tools that can be used for identifying and measuring costs and outcomes.
• Chapter 4 shows how traditional economic appraisal methods can be simplified and performed without complex statistical notation.
• Chapter 5 explores the various issues that require consideration when using economic appraisal methods.
• Chapter 6 expands the evaluator's repertoire and toolkit by introducing readers to financial accounting concepts and tools.
• Chapter 7 adds additional cost-analytical tools to the already expanded toolkit and demonstrates how cost and management accounting concepts can enhance evaluations and decision making.
• Chapters 8 and 9 wrap up the book by illustrating that cost-inclusive evaluation can be used by a varied audience: the scientist–manager–practitioner. These chapters explain how to monetize costs and outcomes through resource → activity evaluation and activity → process and process → outcome analyses.


DISCUSSION QUESTIONS

(1) Identify three benefits of cost-inclusive evaluation.

(2) List and briefly describe two ways that cost-inclusive evaluation can help an organization with strategic decision making.

(3) Organizations are often reluctant to share cost data. How can evaluators help those commissioning evaluations to feel more comfortable with sharing cost data?

(4) What are common forms of resistance to evaluations that focus on processes or activities within programs, or on outcomes produced by programs? For each of those forms of resistance, replace process or outcome with the word cost and see if the question still makes sense. Often this can illustrate problems with, and can even mitigate, resistance to inclusion of costs in evaluations. For example, in early evaluations of different behavioral health programs, resistance came in the form of "How can you ask me such questions when I'm only, and desperately, trying to help?" Today, evaluation in general seems well justified to many. But consider, too, the statement "Why evaluate costs? They can differ so much from region to region that they cannot be rigorously measured and generalized." This may seem a reasonable objection. But substitute outcomes for costs in that statement. Far fewer would consider that variability in outcomes provides a rationale for dismissing an evaluation effort.


APPENDIX 1.1.  Free Electronic Resources on Cost Analysis

• Commonwealth of Australia (Department of Finance and Administration): Handbook of Cost-Benefit Analysis. www.fao.org/ag/humannutrition/33237-0b38a75247f8e69f48a24d7ec850693b2.pdf

• United States (Brian Yates for National Institute on Drug Abuse): Measuring and Improving Cost, Cost-Effectiveness, and Cost-Benefit for Substance Abuse Treatment Programs. https://archives.drugabuse.gov/sites/default/files/costs.pdf

• Canada (Treasury Board of Canada Secretariat): Canadian Cost-Benefit Analysis Guide: Regulatory Proposals. www.tbs-sct.gc.ca/rtrap-parfa/analys/analys-eng.pdf

• Europe (European Commission): Guide to Cost-Benefit Analysis of Investment Projects: Economic Appraisal Tool for Cohesion Policy 2014–2020. https://iwlearn.net/resolveuid/719db343-4025-45dc9e6f-541a2d43e482

• New Zealand (New Zealand Treasury): Guide to Social Cost Benefit Analysis. https://treasury.govt.nz/sites/default/files/2015-07/cba-guidejul15.pdf

• United Kingdom (H.M. Treasury): The Green Book: Central Government's Guidance on Appraisal and Evaluation. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/685903/The_Green_Book.pdf

• United Nations: Standards for Evaluation in the UN System. https://unsdg.un.org/sites/default/files/UNEG-Standardsfor-Evaluation-in-the-UN-SystemENGL.pdf

• United States (U.S. Department of Health and Human Services, Children's Bureau): Cost Analysis in Program Evaluation: A Guide for Child Welfare Researchers and Service Providers. www.acf.hhs.gov/sites/default/files/documents/cb/cost_analysis_guide.pdf

• United States (Government Accountability Office): GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. www.gao.gov/assets/710/705312.pdf




APPENDIX 1.2.  Institutions Offering Training in Cost-Benefit Analysis

• Griffith University (Queensland, Australia): Cost-benefit analysis applications. https://app.secure.griffith.edu.au/course-profile-search/?course_code=3311AFE&semester=3205&submit=Find+profiles%2Foutlines

• New York University (Wagner): Cost-benefit analysis. https://wagner.nyu.edu/education/courses/cost-benefit-analysis

• Online Academy: Cost-benefit analysis. www.onlineacademies.co.uk/management-business-and-accounting/cost-benefit-analysis

• The University of Chicago (Harris Public Policy): Cost-benefit analysis. https://harris.uchicago.edu/academics/programs-degrees/courses/cost-benefit-analysis-2

CHAPTER 2

Types of Costs and Outcomes That Need to Be Considered in Cost‑Inclusive Evaluations

Cost-inclusive evaluations can span a continuum from highly sophisticated and technical to relatively simple and straightforward. The level of sophistication that is desired and used is often influenced by many issues, including the needs of decision makers, the type and quality of costs and outcomes¹ data that are available, and the evaluation budget and time frame for the cost-inclusive evaluation. Regardless of the level of sophistication, most cost studies require basic costs and outcomes data. However, the way costs data are classified can vary greatly depending on whether a professional accountant is recording the data or whether a nonaccountant (or perhaps a volunteer not versed in accounting) records the data. Moreover, when outcomes data are not in monetary units, measurement may be quite difficult, which can again affect the sophistication of the types of analyses performed. This chapter discusses the myriad classification systems that can be used for costs and outcomes. Understanding cost categories can help evaluators and program staff to understand the resources needed to support a program. Likewise, understanding that many outcomes are intangible in nature or may be difficult to convert into monetary units can also help with understanding the true value a program provides. Although traditional economic appraisal methodologies (e.g., cost-benefit analysis, net present value) are essentially concerned with total monetary costs versus total monetary benefits (see Chapter 4), a proper understanding of classification systems is very important to prevent duplication and omission of either costs or outcomes. Such an understanding is also important when other types of cost-inclusive evaluation tools are used (Persaud, 2021), such as those from cost and management accounting, in which the distinction between fixed costs and variable costs is central to many methodologies (see Chapter 7). Finally, in many cost-inclusive evaluations, it may be appropriate or desirable to use a mixed methods approach and analyze both quantitative and qualitative costs and outcomes, as this might be quite enlightening for decision making, especially for program sustainability decisions and when expansion or replication is being contemplated.

¹ "Outcomes can be monetary, i.e., benefits, or nonmonetary, i.e., effectiveness" (Yates, 2009, p. 54). For more discussion, see the section "Distinguishing Effectiveness from Benefits," later in this chapter.

FIXED COSTS VERSUS VARIABLE COSTS

FIGURE 2.1.  Cost behavior within the relevant range. Fixed costs: in total, remain the same regardless of activity; per unit, vary, becoming smaller with greater activity. Variable costs: in total, vary, becoming larger with greater activity; per unit, remain the same regardless of activity.

Fixed and variable costs are critical costs borne in all programs (Linfield & Posavac, 2019). These are the costs that are incurred to produce outcomes (see Figure 2.1). Fixed costs stay constant in total within the relevant range, that is, the range within which current operations can be carried out without incurring additional costs in the short term. Within the relevant range, assumptions pertaining to fixed-cost and variable-cost behavior are valid regardless of program activity or volume (Persaud, 2020, 2021). For example, rent will not increase if a program operates at minimum or full capacity, seeing 1 or 100 participants daily. Beyond rent, typical fixed costs include routine advertising, property taxes, insurance, and administrator and supervisor salaries. When considered on a per-unit basis, such as cost per participant, fixed costs vary inversely with activity levels (Persaud, 2020, 2021), with unitized fixed costs decreasing when volume increases and increasing when volume decreases
(Levin, McEwan, Belfield, Bowden, & Shand, 2018; McKinney, 2004). For instance, if rent is $5,000 per month and 500 participants are seen, rent per participant is $5,000/500 = $10. If only 50 participants are seen instead, rent per participant increases to $5,000/50 = $100. In comparison, variable costs vary in total in direct proportion to the level of operational activity (Levin et al., 2018; Linfield & Posavac, 2019; McKinney, 2004; Persaud, 2020, 2021). Operational activity can be expressed in many ways: program participants, patients served, students enrolled, hospital beds occupied, units produced, or miles driven. To contextualize, when a factory produces more output, it will require more labor, materials, and electricity. Similarly, the number of syringes used in a clinic will vary with the number of participants served, and the number of school meals will increase with the number of children attending school. No variable costs are incurred if operational activity is zero. In contrast to fixed costs, unitized variable costs remain the same; that is, the cost of a syringe will not usually change whether 1, 50, or 100 syringes are used, unless volume discounts are obtained. Common types of variable costs include direct materials, direct supplies, and direct (contractual or hourly) labor.
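The contrasting behavior of fixed and variable costs can be sketched numerically. This illustrative snippet uses the $5,000 rent figure from the example above plus a hypothetical $2-per-participant variable cost for supplies; it simply restates the definitions as arithmetic within the relevant range.

```python
FIXED_COST = 5_000             # e.g., monthly rent: constant in total
VARIABLE_COST_PER_UNIT = 2.0   # e.g., supplies per participant (hypothetical)

def cost_profile(participants):
    """Total and per-participant costs at a given activity level."""
    total_variable = VARIABLE_COST_PER_UNIT * participants
    return {
        "total_fixed": FIXED_COST,                    # unchanged by volume
        "fixed_per_unit": FIXED_COST / participants,  # falls as volume rises
        "total_variable": total_variable,             # rises with volume
        "variable_per_unit": VARIABLE_COST_PER_UNIT,  # constant per unit
        "total_cost": FIXED_COST + total_variable,
    }

for n in (50, 500):
    profile = cost_profile(n)
    print(n, profile["fixed_per_unit"], profile["variable_per_unit"])
# At 50 participants, rent per participant is $100; at 500, it is $10,
# while the variable cost per participant stays at $2 throughout.
```

The printout mirrors Figure 2.1: total fixed cost never moves, unitized fixed cost shrinks as volume grows, and the variable cost per unit stays flat while its total climbs.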

DIRECT COSTS VERSUS INDIRECT COSTS

Direct costs are charges that are incurred for a specific purpose. These costs can be easily traced in a cost-effective (economically feasible) manner (Garrison, Noreen, & Brewer, 2017) to a specific cost object (e.g., program, project, product, activity, participant, student, patient) with a high degree of accuracy. For example, the production of a vaccine for COVID-19 would include two direct costs, namely, direct materials and direct labor. Similarly, in a service facility such as health care, these two direct costs are also incurred. Specifically, patient care includes costs such as drugs, diagnostic tests, days in rehabilitation, and nursing services. These costs can all be directly traced to each patient. Indirect costs include costs that are unintended, that is, costs that occur because of spillovers, multipliers, or by-products, as well as costs that are intended. For instance, "a dam built for agricultural purposes may flood an area used by hikers, who lose the value of this recreation" (Kee, 2004, p. 509). Similarly, an entity or program may incur costs associated with negative externalities, such as environmental damage, that were not envisaged (e.g., an oil spill). More commonly, however, indirect costs refer to costs that are intended but that cannot easily be traced to a specific cost object (Garrison et al.,
2017). Common examples include human resources (e.g., janitors, security guards, administration personnel), facilities (office space, laboratory space), office supplies (e.g., stationery), records (e.g., health, school), information technology (e.g., Wi-Fi, software), materials with a trivial cost that are used in patient care (e.g., bandages, cotton wool, syringes, gloves) or production (e.g., glue, nuts and bolts), and utilities (e.g., electricity, water, heat, telephone). Costs may be direct or indirect depending on the context. For example, utility costs generally are considered direct costs when allocated to an administration office. However, utility costs can be indirect in a production facility or diagnostic laboratory in which multiple products or services are being produced or performed. Indirect costs are often referred to as shared costs and are generally allocated on a share-of-use basis by using a cost allocation or apportionment method. For example, indirect costs can be computed by taking a percentage of the total cost of the particular expense or by allocating cost based on square footage of space utilized. More specifically, if a community center houses three different programs (day care, drug counseling, job skills development) from 7:00 A.M. to 9:30 P.M. daily, then security costs might be allocated equally among the three programs (i.e., one-third, or 33.33%, each), as all three programs are offered for the same time. Electricity costs might be allocated based on a percentage, with perhaps 50% of this cost being allocated to the job skills development program, as this program might utilize more electricity with its computers, sewing machines, food processors, woodworking equipment, and so on. The remaining 50% of community center electricity could be split as 20% to day care and 30% to drug counseling.
Costs of heating from solar collectors or geothermal sources might be allocated proportional to the square area (facilities space) occupied by each program. Wi-Fi and 6G charges might be allocated 70% to the drug counseling program, 20% to the day care program, and 10% to the job skills development program. Wi-Fi and 6G charges could also be allocated based on minutes of Internet access and the number of calls from each program, with the fixed-cost portion being spread equally among the three programs. This would only be possible, however, if a log of the calls in each program is maintained. The method of apportionment chosen should be cost-effective, realistic, and justifiable. Evaluators should clearly document the apportionment
method used to compute indirect costs, as well as any assumptions that are used in measurement, so that decision makers can be properly informed.
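The community-center apportionment above can be laid out as a small, documentable calculation. This sketch uses the percentage splits from the example (security shared equally; electricity 50/20/30); the dollar amounts for the shared costs are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical monthly shared costs for the community center example.
shared_costs = {"security": 3_000, "electricity": 1_200}

# Apportionment percentages from the example in the text.
allocation = {
    "security":    {"day_care": 1/3,  "drug_counseling": 1/3,  "job_skills": 1/3},
    "electricity": {"day_care": 0.20, "drug_counseling": 0.30, "job_skills": 0.50},
}

def allocate(shared_costs, allocation):
    """Apportion each shared (indirect) cost to programs by stated shares."""
    totals = {}
    for cost_name, amount in shared_costs.items():
        for program, share in allocation[cost_name].items():
            totals[program] = totals.get(program, 0.0) + amount * share
    return totals

print(allocate(shared_costs, allocation))
# Each program's combined share of security and electricity costs.
```

Keeping the shares in one table, as here or in a spreadsheet, doubles as the documentation of the apportionment method that decision makers will want to see.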

CAPITAL COSTS VERSUS RECURRENT OR OPERATIONAL COSTS

Capital costs are expenditures on tangible long-lived assets (a.k.a. fixed assets) with a life beyond a year (Persaud, 2007). These assets are used in business operations, generally to generate revenue or to provide a service. In the literature, capital assets are described as assets that cannot be quickly converted into cash. However, it may be argued that valuable property can usually be sold instantly, albeit below market value in some cases. Thus a more appropriate definition might be tangible long-lived assets whose true market value may not be realized if cash is urgently required. Capital assets comprise land, plant, buildings, furniture, fixtures, equipment, vehicles, and machinery. In general, the costs of most or all capital assets are incurred up front to get a business or initiative operational. Costs of capital assets include the purchase price plus all ancillary costs. For example, the installation of computer equipment requires special infrastructure (e.g., air conditioning units, network cables, security grills, alarm systems). Similarly, a generator may require a special storage shed. These ancillary costs are part of capital costs. In the entity's accounting records, capital assets are expensed over their useful life via a process referred to as depreciation (i.e., a process that writes off a portion of the capital asset annually) to comply with the accrual method of accounting. In cost analysis, two approaches can be used to expense capital costs: expense the full capital cost in the period in which the expenditure occurs, or expense the capital asset via an annual depreciation charge. Both methods are acceptable, but only one or the other should be used; using both results in double counting (Boadway, 2006). In contrast to capital costs, recurrent costs (a.k.a.
operating costs) are costs that occur within each budget cycle and are consumed annually. Common recurrent costs include employee remuneration, rent, utilities (e.g., telephone, water, gas, electricity), materials and supplies, taxes, insurance, advertising, maintenance, and repairs. Maintenance includes all costs incurred to prevent deterioration of facilities and furniture and breakdown of vehicles, machinery, plant, and equipment. For example, in an educational establishment, maintenance costs would include repairs such as revarnishing furniture; repainting blackboards; servicing of photocopiers, computers, and other equipment; repairs to leaking roofs; painting faded buildings; maintenance of school grounds; and so on. Recurrent costs may include software and hardware upgrades and improvement costs.
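The two approaches to expensing capital costs can be contrasted in a short sketch. The figures are hypothetical (a $50,000 program vehicle with a 5-year useful life and no salvage value), and straight-line depreciation is assumed for simplicity; whichever approach is chosen, applying both to the same asset would double-count the cost.

```python
def expense_up_front(purchase_cost):
    """Approach 1: expense the full capital cost in the year of purchase."""
    return purchase_cost

def annual_straight_line_depreciation(purchase_cost, useful_life_years,
                                      salvage_value=0.0):
    """Approach 2: spread the cost evenly over the asset's useful life."""
    return (purchase_cost - salvage_value) / useful_life_years

# Hypothetical asset: a $50,000 vehicle with a 5-year useful life.
cost, life = 50_000, 5
print(expense_up_front(cost))                         # 50000
print(annual_straight_line_depreciation(cost, life))  # 10000.0
# Over the full life, both approaches expense the same total amount:
print(annual_straight_line_depreciation(cost, life) * life)  # 50000.0
```

The choice mainly affects which years carry the cost, which matters when comparing annual program costs across a multiyear evaluation.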

Types of Costs and Outcomes 

OPPORTUNITY COSTS Numerous definitions exist for the increasingly popular term opportunity costs, some of which are debatable. However, embedded in most definitions is the concept of what is missed when choices are made to do one thing over another or to choose one alternative over another (Linfield & Posavac, 2019). Essentially, the concept of opportunity costs resides in the simple premise that there are insufficient resources to satisfy every need and that it follows that most, if not all, resources have alternative uses (Persaud, 2007). As such, “opportunity cost is the forgone benefit of options not chosen” (Persson & Tinghög, 2020, p. 301). Although the concept of opportunity costs appears to be relatively straightforward, in practice the consideration of opportunity costs tends to be quite neglected (Persson & Tinghög, 2020). This could result in flawed cost analyses, as the best options may be overlooked. Accountants, for example, are generally uncomfortable with this concept, because opportunity costs cannot be determined from an accounting system (Lothian & Small, 2003; Palmer & Raftery, 1999). This contrasts with economists, for whom this concept is fundamental to their view of costs (Persaud, 2007). Managers, for their part, only speak about opportunity costs when making decisions among monetary choices, whereas evaluators tend to think of opportunity costs from more of a nonmonetary perspective, although they rarely consider these costs. In too many cases, in social programs in particular, opportunity costs are often ignored, as they are intangible—­they usually cannot be “seen”—and thus are seldom considered in decision making. Opportunity costs, however, are not restricted to pecuniary costs. In most human services, for example, volunteer time represents a major resource for service delivery. 
In such cases, this time often represents an opportunity cost: the utility forgone by those persons because they are not doing something else (e.g., lost time, potentially lost employment, forgone pleasure, and less time spent with family and friends). It is important for opportunity costs to be reflected in cost appraisals, even when no explicit cash transactions are involved (Kind, 2001; Treasury Board of Canada Secretariat, 1998). Opportunity costs have implications for both policymakers and private individuals. As such, the perspective of the cost study (see Chapter 5) is also important when considering opportunity costs. For instance, a social cost-benefit analysis will need to consider opportunity costs from multiple perspectives. However, the program and individual perspectives are more restricted and may examine opportunity costs from only one perspective (Palmer & Raftery, 1999). For example, when a local authority uses land and capital to build a school or recreational park, this will be done at the sacrifice of some other societal good, perhaps additional housing, health clinics, businesses,
or agriculture. Thus, if a new recreational park uses land that could otherwise be sold for $2 million to the local farm bureau, building the park entails an opportunity cost, namely, the forgone $2 million sale. Therefore, this $2 million should be reflected in the proposed cost appraisal of the recreational park. Note that in applying the concept of opportunity cost, the land cost that would be included is the market-determined value of $2 million, rather than the original cost of the land, which may have been only $1 million. Likewise, if a school implements a dropout prevention program that takes place at the school either during the school day or after school hours, the space in which it is held would have an opportunity cost if this space could have been rented out or if it has some alternative use. If the space has no alternative use, then the opportunity cost would be $0. Other opportunity costs associated with such a program would include the opportunity cost to students and parents who take part in the program. In cost-inclusive evaluations, opportunity costs should always be discussed and, if possible, quantified for a truer, more complete reflection of program costs. The inclusion of opportunity costs is particularly important to ensure program sustainability. Specifically, it is important for decision makers to keep in mind the costs of "free" resources such as volunteered services in the event that these services become unavailable. Knowing true program costs is also crucial to accurately replicate a program. If opportunity costs are too difficult to quantify, they can at least be described qualitatively to provide a more valid, useful perspective on true program costs for informed decision making. In general, opportunity costs are estimated using either market or shadow prices (discussed further in Chapter 5). 
In perfectly competitive markets where there are no distortions and buyers and sellers have full knowledge of market conditions, market prices are a good reflection of opportunity costs (Persaud, 2007). However, in markets with distortions, market prices do not accurately reflect opportunity costs. In such cases, opportunity costs are computed using shadow or accounting prices (Bruce, 2000), which is a difficult and technical process. Alternatively, the costs associated with various inputs may be used to estimate opportunity costs for some types of programs (e.g., health care), as no market price may exist and shadow pricing may be too difficult (Palmer & Raftery, 1999).
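The two valuation points above, using market value rather than historical cost, and pricing volunteer time, can be sketched as follows. The volunteer figures (500 hours at $20 per hour) are hypothetical:

```python
def land_opportunity_cost(current_market_value, original_cost):
    """The opportunity cost of using land is its current market value
    (the forgone sale), not its historical purchase price."""
    return current_market_value  # original_cost is deliberately ignored

def volunteer_time_cost(hours, hourly_rate):
    """Volunteer time valued at what that time could otherwise earn."""
    return hours * hourly_rate

# Recreational-park land from the text: market value $2M, original cost $1M
print(land_opportunity_cost(2_000_000, original_cost=1_000_000))  # 2000000
# Hypothetical volunteers: 500 hours valued at $20/hour
print(volunteer_time_cost(500, 20))  # 10000
```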

SUNK COSTS In economics, accounting, and business, sunk costs are defined as incurred expenses (Persaud, 2007), that is, money already spent (Linfield & Posavac, 2019) that is irrecoverable (Persaud, 2018, 2020). Examples include costs
such as prior excavation fees, prior legal fees, rent already paid, costs of policy development, feasibility studies, and research and development costs. In decision making, it is recommended that historical costs be ignored in present and future decisions as they are irrelevant (Roth, Robbert, & Straus, 2015; Wagner, 2020; Weygandt, Kimmel, & Kieso, 2010). The logic of this rule is based on the argument that if costs cannot be recovered, they cannot influence current or future decisions. Considering them would therefore distort analyses. For example, if equipment that was purchased 2 years ago (cost $20,000, current market value $5,000) is under review to determine whether it should be replaced, the decision should ignore the cost of the equipment. Only the future benefits that can be derived should be analyzed, as the present decision has absolutely nothing to do with the purchase 2 years ago. Therefore, only the opportunity cost of $5,000 is relevant, not the sunk costs of $15,000. The rationale for the exclusion of sunk costs has been summed up by the New Zealand Treasury (2015). According to the Treasury, "sunk costs are not included in an economic CBA because there is no opportunity cost involved and their inclusion may distort the analysis at hand by requiring a very high return on the investment" (p. 28). Specifically, present and future decisions need to be concerned with present and future costs and benefits (i.e., incremental costs and benefits). In practice, however, this simple rule is not always followed, even in situations in which it is highly applicable, and many managers fall prey to what are called sunk-cost biases (Sirois, 2019). Specifically, the literature suggests that the conversation on sunk costs is still quite fragmented and controversial. 
Indeed, rational decision making is often ignored, and cognitive biases creep into decision making, thus allowing inferior alternatives to be pursued based on significant retrospective investment (Dijkstra & Hong, 2019; Olivola, 2018), time, or effort (Jarmolowicz, Bickel, Sofis, Hatz, & Mueller, 2016). This irrational behavior can result in substantial losses if the initiative is pursued. For example, Linfield and Posavac (2019) explain that the Government Accountability Office recommended that the U.S. Postal Service billion-­dollar package-­sorting system be abandoned 2 years after its implementation, as its incremental future costs of operation far outweighed its benefits. In summary, sunk costs are appropriate to ignore in many programs for which a cost-­inclusive evaluation is conducted. Exceptions to the rule are cases in which sunk costs would need to be considered to get a true overall perspective on total program costs. For instance, in an ex-post appraisal, all costs (including sunk costs) would need to be considered (Kee, 2004). Additionally, if a study is being conducted to determine the worth of a program in relation to its critical competitors, sunk costs may also be relevant (Persaud, 2007). Also, when program replication is being contemplated, both retrospective and prospective costs are likely to be important.
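The equipment-replacement example can be sketched as a short decision rule. The $20,000 purchase price never appears; the future benefit figures are hypothetical:

```python
def replacement_decision(benefit_keep, benefit_replace, new_equipment_price,
                         old_resale_value):
    """Compare alternatives on future (incremental) costs and benefits only.
    The original purchase price of the old equipment is sunk and never
    enters the comparison."""
    net_keep = benefit_keep                      # keep old equipment as is
    net_replace = (benefit_replace - new_equipment_price
                   + old_resale_value)           # selling the old unit recovers its market value
    return "replace" if net_replace > net_keep else "keep"

# Equipment bought 2 years ago for $20,000 (sunk), resale value $5,000 today.
# Hypothetical future benefits: $8,000 if kept, $15,000 if replaced at $10,000.
print(replacement_decision(8_000, 15_000, 10_000, 5_000))  # replace
```

Only the $5,000 resale value (an opportunity cost of keeping) and the future flows drive the answer, consistent with the incremental-cost rule above.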

34 

  The Why, the T ypes, and the Tools

TANGIBLE VERSUS INTANGIBLE COSTS AND OUTCOMES Tangible costs and tangible benefits2 are resources that can be easily quantified and converted into monetary units (Kee, 2004). In contrast, intangible costs and intangible benefits are those that either “cannot be quantified or are difficult to quantify in monetary terms” (New Zealand Treasury, 2005, p. 10). Resources can be intangible but still quite important, especially when evaluating the worth of a program. Intangibles can be divided into two major categories, namely, intangible assets and nonasset intangibles. Intangible assets are assets that do not possess physical substance and that are created through time or effort. Examples include goodwill, trademarks, copyrights, patents, franchises, and trade names. Nonasset intangibles, as the name suggests, are all other types of intangibles which either cannot be measured or cannot be easily measured (e.g., greater self-­esteem from learning to read, happiness, sadness). In contrast to intangible assets, the valuation of nonasset intangibles is considerably more complicated (Belli, Anderson, Barnum, Dixon, & Tan, 2001; New Zealand Treasury, 2005). In general, the recommended approach is to include intangibles in quantitative analyses whenever they can be reliably measured and converted into monetary units. Where intangibles cannot be quantified with any degree of accuracy, they should be excluded from the analysis, as their inclusion could be misleading. Intangibles excluded from quantitative analyses still should be described and considered in qualitative analyses if they are significant enough to affect decisions. In some cases, it may be useful to report two analyses, one excluding and the other including intangible costs and benefits (New Zealand Treasury, 2005). 
Alternatively, when intangible benefits are very difficult to monetize, it may be more appropriate to abandon cost-­benefit analysis in favor of cost-­effectiveness analysis (Kee, 2004; Rossi, Lipsey, & Freeman, 2004).

2 As aforementioned, in this book we use the term outcomes, which comprise benefits (monetary) and effectiveness (nonmonetary). However, many government guidelines and other global institutions, such as the World Bank, simply refer to monetary versus nonmonetary costs and benefits. Note also that revenue or income generated would also be considered as a monetary benefit.

MONETARY VERSUS NONMONETARY COSTS AND OUTCOMES As the name suggests, monetary costs involve actual cash outflows (e.g., payment for supplies, materials, capital assets, services). These costs comprise direct and indirect costs and can be measured in monetary units such as dollars or euros. Monetary outcomes (i.e., monetary benefits) involve actual inflows of cash to some entity (e.g., receipts from sales and services, participant fees, sale of capital assets, cash donations, government subventions or grants). Additionally, in human services, monetary outcomes may also include future savings in payments for services no longer needed or additions to future income. In contrast, nonmonetary costs and outcomes (i.e., effectiveness) do not involve actual cash. According to Scriven (2007), the "most common nonmonetary costs are space, time, expertise, and common labor (when these are not available for purchase in the open market—if they are so available, they just represent money costs); PLUS, the less measurable ones—stress, political and personal capital (e.g., reputation and goodwill), and immediate environmental impact—which are rarely fully coverable by money" (p. 13). Building on Scriven's definition, the time frame for environmental impacts needs to be extended beyond immediate impacts to cover both the medium and long term, as many environmental impacts are felt well beyond the immediate term. For example, the nonmonetary impacts of the Exxon Valdez oil spill in 1989 were grossly underestimated and extended far into the future. Likewise, the time frame for other types of impacts would also benefit from a continuum of immediate to longer term, as many impacts, such as health effects from something good or bad, may stay with us for life. Further, opportunity costs, donations, and volunteer time can also be classified as nonmonetary, as they entail no actual cash inflows or outflows. At this point, it is important to point out that when cost studies are being performed, "the same components may enter into the calculation as benefits from one perspective and as costs from another" (Rossi et al., 2004, p. 252) perspective. 
For instance, although donated goods and services3 do not incur actual monetary outflows4 from the program perspective, they are a cost from the communal or individual perspective. These costs may, however, need to be considered if the purpose of the financial analysis is to determine actual costs for program replication purposes and similar donated equipment and services are not forthcoming. Thus, if a community health center received donated medical equipment and volunteer services from local doctors and nurses, the equipment and services would need to be priced to obtain an accurate idea of the costs involved with program replication. The equipment could be priced using the going market rate for similar equipment, and the services could be priced using the professional rates for doctors and nurses. Additionally, as previously mentioned, effectiveness represents a nonmonetary outcome. For example, a smoking cessation campaign may cost $1,000,000 but save 1,000 lives; a program to prevent students from dropping out of School Y may cost $500,000 and prevent 10% of the students in School Y from dropping out of school; or a program for alcoholics may cost $200,000 but get 50% of alcoholics attending the program to abstain from alcohol consumption for a year. When costs and benefits are both expressed in monetary units, several cost-analytical methodologies can be used to evaluate a program (see discussion in Chapter 4). However, when it is difficult to monetize certain types of outcomes (e.g., lives saved), or where no market value exists for the outcome because of its intangible nature (e.g., cost of happiness, greater self-confidence, higher self-esteem, good health), or if effectiveness data are the only type of outcome data that are available, then it may be more appropriate to use cost-effectiveness analysis, which analyzes expenditure in relation to results achieved or obtained. This methodology is also discussed in Chapter 4. All other things being equal, happiness, greater self-confidence, higher self-esteem, and good health should all translate into a more contented individual, who we would expect to be more productive in society. Thus, over the longer term, these intangible outcomes could eventually be monetized through greater productivity (employment), as well as other contributions that such individuals may make to society through taxes, greater spending, reduced dependence on welfare systems, reduced reliance on health and mental health care services, and so on. Note, however, that such benefits are not of interest in a financial cost analysis of a specific program but would be of great importance in a social cost-benefit analysis from the community perspective.

3 Noncash gifts represent a major source of support in many nonprofits. However, recording them to comply with tax reporting can be complex. In the United States, if the entity's financial statements are prepared to comply with generally accepted accounting principles (GAAP; see Chapter 6) or are subject to an annual external audit because it is required by state law, then all noncash gifts should be captured and reported in the organization's financial records. Even if the organization does not comply with GAAP, keeping detailed records is still useful for strategic planning and internal management (www.cfoselections.com/perspective/in-kind-donations-accounting-and-reporting-for-nonprofits).

4 Because no actual cash is expended for donated goods and services, they may be considered as benefits from the program perspective.
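The cost-effectiveness analysis mentioned above boils down to cost per unit of nonmonetary outcome. A minimal sketch, using the smoking cessation figures from the text and an assumed attendance of 100 for the alcohol program (the text gives only the 50% abstinence rate):

```python
def cost_effectiveness_ratio(total_cost, units_of_effectiveness):
    """CER: program cost per unit of nonmonetary outcome achieved."""
    return total_cost / units_of_effectiveness

# Smoking cessation campaign from the text: $1,000,000, 1,000 lives saved
print(cost_effectiveness_ratio(1_000_000, 1_000))  # 1000.0 dollars per life saved

# Alcohol program: $200,000 with 50% of attendees abstinent for a year;
# the number of attendees is not given in the text, so 100 is assumed here
attendees, abstinent_share = 100, 0.50
print(cost_effectiveness_ratio(200_000, attendees * abstinent_share))  # 4000.0 per abstinent participant
```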

QUANTITATIVE VERSUS QUALITATIVE COSTS AND OUTCOMES In a cost-inclusive evaluation, it is sensible to consider measuring the types, amounts, and monetary values of all resources used in a program, in addition to the effectiveness or benefits of the program, from multiple stakeholder perspectives, using both quantitative and qualitative analyses. Yet, most if not all nonmonetary costs and many nonmonetary outcomes often are not measured or even discussed, thus doing a great injustice to a cost-inclusive evaluation of an initiative. Classical economic appraisal focuses primarily on monetary inflows and outflows, as these resources must be recorded for tax reporting and funding compliance. However, because resources that are not typically considered "costs," such as volunteer time and donations, and nonmonetary outcomes, such as greater self-confidence or improved mental health, are not recorded in program records, these valuable resources and outcomes are frequently unreported. This risks underestimating the true costs and worth of a program or service. Many sophisticated methods can be used in cost-inclusive evaluation for discerning the costs of specific activities in a highly reliable and valid manner, as detailed in this chapter and in Chapters 4, 6, and 7 of this volume. These methods can be adapted so that the cost-inclusive evaluation includes perspectives other than those of the provider and funder (the two most commonly used in cost assessment). Resources devoted to or provided by participants, patients, or consumers of services also can be measured for their participation in the program as a whole and for specific activities of the program. For instance, consumer time and other consumer resources devoted to actual interactions with providers, "homework" prescribed by providers, and activities such as exercise and self-medication also can be considered in cost-inclusive evaluations of resource → activity relationships (see Figure 2.2). Some cost-inclusive evaluations would include consumers' weekly or daily time and transportation costs back to home or work as well. In the latter case, this could be estimated quantitatively, using mileage to and from the service delivery site. 
With respect to actual interactions with providers, self-medication, and so on, these activities can be measured quantitatively where lost wages were incurred, or qualitatively through ratings of the relative worth of the different resources involved.

SERVICE PROVIDER COSTS FOR PTSD

Direct Monetary Costs
• Counselor time with participant
• Biofeedback equipment use for each participant
• Therapeutic drugs for each participant

Indirect Monetary Costs
• Proration of overhead costs (e.g., space, utilities, communication, administrative staff, sundry expenses)

Nonmonetary Costs
• Donated goods
• Donated services
• Volunteer time

PARTICIPANT COSTS FOR PTSD

Direct Monetary Costs
• Insurance costs or participant out-of-pocket costs for therapy
• Transportation costs
• Wages (if lost)

Nonmonetary Costs
• Time cost for treatment for participant
• Time cost for family member who accompanies participant (can also be a direct monetary cost if wages are lost)
• Homework therapy time
• Psychological cost of participant stress, tolerance for treatment, etc.

Generally, participant costs and nonmonetary costs of service providers are completely ignored in cost-inclusive evaluations and not even discussed qualitatively.

FIGURE 2.2. Resource assessment of treatment of participants with posttraumatic stress disorder (PTSD).

OTHER TYPES OF COSTS Psychological costs encompass issues such as a participant's ability to cope with major life events, daily stressors, hassles, and difficulties, and tolerance for treatment. These issues have major implications for one's ability to cope without resorting to desperate coping mechanisms (Yates, 1999) and carry real and tangible costs that are often not recorded quantitatively and are not even discussed qualitatively. Few providers and consumers of human services would deny the importance of tolerance and coping strategies for stress as resources needed by consumers and providers alike for services to achieve the outcomes desired. These resources can both be described in words and rated according to the amount required by different treatment activities. Generally, participants who find treatment too stressful or stigmatizing are likely to stop treatment, possibly before the time and money spent have "paid off" in measurable effects. Attending sessions but not doing the recommended homework will also likely diminish the effectiveness of the treatment and any cost savings or increased income that might have been produced. The psychological nuances of these issues should therefore be assessed in the determination of the merit and worth of a particular treatment procedure.

Similarly, bullying in schools can psychologically harm and negatively affect development in children and young adults. Low motivation and self-esteem are often manifested in those who cannot afford counseling and may even affect those who do attend such sessions. In-house counseling may also not be fully utilized because of the stigma attached to visiting the school clinic. Educational policies geared at stamping out bullying may be comparatively more valuable than counseling programs to help victims of bullying. Examining costs and outcomes of alternatives may therefore be warranted in cost-inclusive evaluations, as opposed to just evaluating the costs, cost-effectiveness, and cost-benefit of one particular treatment or service.

In cost-inclusive evaluation, we often tend to think of costs only in quantitative, monetary units such as dollars. However, as Yates (1980b) points out, psychological costs can be measured in a relatively reliable and valid manner using simple Likert-type rating scales. For example, participants can be asked to rate their stress levels on a scale, with 1 representing low stress and higher numbers representing higher stress, to gauge the psychological costs associated with major service components of a program. The ratings of the psychological costs versus benefits of different program components can then be used to evaluate treatment success (see Table 2.1).

TABLE 2.1. Measuring Psychological Costs of Specific Program Activities

Rate your stress levels on the Likert-type scale below for each service procedure that was a component of your treatment (1 = low stress, 10 = high stress).

Service procedure    Low                                  High
Name of activity     1   2   3   4   5   6   7   8   9   10
Name of activity     1   2   3   4   5   6   7   8   9   10
Name of activity     1   2   3   4   5   6   7   8   9   10
Name of activity     1   2   3   4   5   6   7   8   9   10
Name of activity     1   2   3   4   5   6   7   8   9   10
Name of activity     1   2   3   4   5   6   7   8   9   10

Financing by way of access to a line of credit at a local financial institution is quite normal in business operations of every sort. These interest charges and, in some cases, collateral issues should also be factored into cost-inclusive evaluations.

Insurance is a mandatory requirement for all business operations. It can include many different types of insurance, such as property insurance, insurance for the building's content (e.g., furniture, equipment, supplies), health insurance for employees, and professional liability insurance. If only one program is being run, then all insurance costs will be assigned to the program. However, when more than one program is being run using the same facilities and staff, it may be necessary to prorate insurance costs to different programs using an appropriate apportionment method.

Hidden costs refer to costs that may be overlooked or omitted in error, particularly when a program is being expanded (Posavac & Carey, 2003). For example, if a school building is being built in a tropical country, fans or air conditioning units, custodial expenses, security expenses, insurance costs, and fringe benefits may be overlooked in cost estimates. Hidden costs can also include costs not included in the purchase price of something. For example, computer equipment requires routine maintenance and hardware and software upgrades. Most hidden costs are eventually subsumed under recurrent or operating costs.

Research costs geared at improving processes and procedures and thus the ultimate outcome of a program or treatment are typically not accounted for in cost-inclusive evaluations. Yet these costs play a critical role in improving overall services. They should therefore be included at least in a qualitative discussion if too complex for a quantitative discussion, as their omission can lead to underreporting of resources used to effect positive change.
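Once ratings of the kind collected with Table 2.1 are in hand, summarizing them is straightforward. A minimal sketch with hypothetical activities and responses:

```python
from statistics import mean

# Hypothetical Table 2.1 responses (1 = low stress, 10 = high stress)
ratings = {
    "group counseling": [3, 4, 2, 5, 3],
    "biofeedback":      [7, 8, 6, 9, 7],
    "homework therapy": [5, 4, 6, 5, 4],
}
# Mean rating per activity: higher mean = higher psychological cost
psychological_cost = {activity: mean(scores) for activity, scores in ratings.items()}
most_stressful = max(psychological_cost, key=psychological_cost.get)
print(most_stressful, psychological_cost[most_stressful])  # biofeedback 7.4
```

Such per-activity averages let an evaluator weigh the psychological cost of each service component against its contribution to treatment success.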

WHEN ARE DIFFERENCES IN COSTS AND OUTCOMES REAL? When different programs, or newer versus older versions of the same program, are compared, outcomes are a common focus. Outcomes advocated by providers, participants, and researchers commonly are suited to the immediate focus of the program(s), such as alleviation of depression, a decrease in drinking days per month, or improved scores on educational achievement tests. When including costs in an evaluation, it is increasingly common to include outcomes as well, such as increased days of employment that can be monetized (e.g., as increased earnings). Even reductions in health care and criminal justice expenses may be examined in an evaluation. Statistical tests for costs, benefits, cost-effectiveness, and cost-benefit are means by which many evaluators ask whether outcomes of one program are different from outcomes of another program. Whether simple (e.g., t-tests) or complex (e.g., 3-level hierarchical linear modeling), statistical analyses examine whether a difference in outcomes is or is not significant. The same statistics can test differences in costs, benefits, and indices of cost-effectiveness and cost-benefit. One common index is cost ÷ effectiveness, which provides a cost-effectiveness ratio (CER). In addition, simple
statistical tests5 can examine whether the average benefits ÷ costs, which provides a benefit/cost ratio for a program, is significantly better than the “break-even” ratio of 1.0 and whether the average net benefit is better than a “break-even” value of zero (benefits only canceling out costs). Challenges for statistical tests include statistical power: having sufficient data to be confident that differences found are “real” and that differences found to be statistically insignificant would not become significant with additional participants. Outcomes that appear to occur at the level of the program rather than at the level of individual participants, such as number of patients “cured” or number of patients falling ill, still can be examined for statistical significance between programs in comparisons of proportions. When statistical analyses examine monetary outcomes, that is, benefits, as well as costs and indices of cost-­effectiveness and cost-­benefit, preliminary analyses often show that common assumptions of many statistical tests, such as data having a normal (“bell curve”) distribution, may not hold. Monetary outcomes such as cost-­savings, costs, and indices calculated from these often have highly skewed or even multimodal distributions, due to high use of health or other services by a few participants and little or no use by most participants. In these instances, evaluators can compare programs using statistical tests that do not assume normal distributions (nonparametric statistical analyses such as chi-­square or Mann–­W hitney U) and statistical analyses designed for outcomes that are measured as events (e.g., survival analyses) or as falling into one of a limited possible number of outcomes (e.g., logistic regression). 
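The nonparametric comparison described above can be sketched with a permutation test, which, like the Mann–Whitney U named in the text, makes no normality assumption and so suits skewed cost data. The programs and cost figures here are hypothetical:

```python
import random

def permutation_test(costs_a, costs_b, n_perm=5_000, seed=42):
    """Nonparametric test of a difference in mean per-participant cost:
    shuffle the pooled costs repeatedly and count how often a difference
    at least as large as the observed one arises by chance."""
    rng = random.Random(seed)
    n_a = len(costs_a)
    observed = abs(sum(costs_a) / n_a - sum(costs_b) / len(costs_b))
    pooled = list(costs_a) + list(costs_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_perm  # p-value: no normality assumption required

# Hypothetical skewed costs: one heavy service user in program X
program_x = [100, 120, 90, 110, 95, 3_000]
program_y = [100, 115, 105, 98, 102, 110]
p_value = permutation_test(program_x, program_y)
```

A t-test on `program_x` would be distorted by the single $3,000 outlier; the permutation approach sidesteps that distributional assumption.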
Effect sizes for costs, benefits, cost-effectiveness, and cost-benefit are increasingly required in quantitative analyses in addition to statistical significance, because a difference that passes a significance test may still not be meaningful. To gauge whether differences are "real," effect sizes6 are routinely calculated for measures of effectiveness and can be calculated similarly for measures of costs, benefits, cost-effectiveness, and cost-benefit. Effect sizes can be calculated in several ways (see Ellis, 2010). The results are both quantitative, that is, a specific and rather exact number, and qualitative, that is, corresponding to judgments such as small, moderate, or large (see Sawilowsky, 2009; Pallant, 2001).

Cost per cure and cost per clinically significant change are additional measures of the meaningfulness of a program-produced difference. The former emerged from the increasingly popular number needed to treat, or NNT (Kraemer & Kupfer, 2006). NNT provides a program-level measure of outcome—for example, for every three people who began the program, only one emerged with the basic skills targeted by the program. Although NNT is clearly a reflection of program outcome, programs' costs can be incorporated to generate a similar index by dividing the total cost of a program by the number of participants to achieve at least a threshold level of effectiveness, sometimes referred to as the "cost per cure." French, Yates, and Fowles (2018) provide an example of this index for two different programs for delivering parent training. Another common index of the "real significance" of program-produced change is reliably and clinically significant change (RCSC; Jacobson & Truax, 1991). Average cost per participant could be divided by RCSC to show cost per clinically significant change.

5 "Another term for test statistic; that is, any of several tests of the statistical significance of findings. Statistical tests provide information about how likely it is that results are due to random error" (Vogt, 1999, p. 278).

6 "Any of several measures of association or of the strength of a relation . . . often thought of as a measure of practical significance" (Vogt, 1999, p. 94).
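The "cost per cure" index can be illustrated with a short calculation. All figures (300 entrants, an NNT of 3, $1,500 per participant) are hypothetical:

```python
def cost_per_cure(total_program_cost, n_successes):
    """Total program cost divided by the number of participants who reach
    at least the threshold level of effectiveness."""
    return total_program_cost / n_successes

# Hypothetical program: NNT of 3 (one success per three entrants),
# 300 entrants at $1,500 per participant
n_entrants, nnt, cost_per_participant = 300, 3, 1_500
n_successes = n_entrants // nnt
print(cost_per_cure(n_entrants * cost_per_participant, n_successes))  # 4500.0
```

Equivalently, cost per cure is the per-participant cost multiplied by the NNT: $1,500 × 3 = $4,500.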

QUESTIONS ABOUT COST AND OUTCOME DIFFERENCES

"Is the change real? . . . meaningful? . . . worth it?" are questions familiar to program evaluators when the focus is on the outcomes of one or more programs. The same questions can be asked about costs: "Is the cost substantial enough to be of concern? Are the costs returned to society or other stakeholder groups in part, in full, or several times over, whether in future savings for services no longer needed or in future improvements in productivity, such as increased workdays or earnings?"

It is curious, although common, that apparent differences in cost are considered "real" without those differences being tested statistically. Tell some evaluators that you think one program fosters more days of employment than another, for instance, and they often will ask, "OK, but significantly more days? And, really, what is the size of that effect?" Inform those same evaluators that you think one program costs a small amount less than another, say, $10 less per participant with an average cost of $1,000 versus $1,010 per participant, and you will often hear, "Wow—that $10 will really add up with enough participants!" Of course, the $10 difference could simply be the result of variability in time spent in program activities, or even in time spent getting to and from program sites. Recruit and survey a few more participants in the same programs, and the difference could reverse just as easily as it could magnify or become nil.

The problem seems to be that many of us treat costs differently from outcomes: place a "$," "£," or "¥" before a number, and that number suddenly acquires far more face validity for many stakeholders, including evaluators. If, however, costs reflect the monetary value of actual resource use, and if resource use differs between participants who spent different amounts of time in treatment or who received different services from programs, then there will be variance in costs—just as there is variance in effectiveness and other outcomes. Unless the evaluator has opted simply to estimate the cost per participant by dividing total cost by the number of participants seen, variability in costs can be examined statistically. In addition, apparent differences in program costs can be converted to effect sizes. The same can be done for monetary outcomes (benefits) and for indices such as cost-effectiveness ratios and benefit-cost ratios (e.g., net benefit).
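The point that cost variability can be analyzed like outcome variability can be made concrete: per-participant costs from two programs can be compared with a standardized effect size just as effectiveness scores can. A minimal sketch using only the standard library; the function and the cost data are hypothetical, not from the text:

```python
from statistics import mean, stdev

def cohens_d(costs_a: list[float], costs_b: list[float]) -> float:
    """Standardized difference in mean per-participant cost (pooled SD)."""
    na, nb = len(costs_a), len(costs_b)
    pooled_var = ((na - 1) * stdev(costs_a) ** 2 +
                  (nb - 1) * stdev(costs_b) ** 2) / (na + nb - 2)
    return (mean(costs_a) - mean(costs_b)) / pooled_var ** 0.5

# Hypothetical per-participant costs for two small programs:
program_a = [980.0, 1020.0, 990.0, 1010.0, 1000.0]   # mean $1,000
program_b = [1005.0, 1015.0, 1020.0, 1000.0, 1010.0]  # mean $1,010
d = cohens_d(program_a, program_b)
# d is approximately -0.8: Program A's mean cost is $10 lower,
# about 0.8 pooled standard deviations, so the "small" $10 gap
# is large relative to cost variability in this toy sample.
```

With real data, the same samples could also feed a t-test before anyone declares the cost difference "real."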

DISTINGUISHING EFFECTIVENESS FROM BENEFITS

Observable, reportable outcomes, plus subsequent impacts on society, typically are primary foci of evaluations. Funders, advocates for program consumers, consumers themselves, administrators, and especially providers all want answers to the common questions "Did it work?" and "For whom did it work best?" As we have argued, the costs of producing those outcomes and impacts are important to include in evaluations.

As alluded to earlier, however, some outcomes have special meaning to many stakeholders, including funders, taxpayers, and families of consumers: reduced need for health services, reduced involvement in the criminal justice system, increased days employed, and augmentation of employment benefits such as health care, to name a few. These outcomes can, in turn, be translated into monetary units, such as funds saved by the health care system due to reduced need for services, funds saved by reduced participation in the criminal justice system, increased income generated by additional days employed, and reduced need to pay "out of pocket" for benefits now provided by an employer. Whether termed "monetary outcomes" or "social impacts," these are clear benefits targeted by, and hopefully generated by, some programs.

When fees are charged for services or revenue is generated from sales, these cash inflows are also considered a benefit from a program or entity perspective. For example, if a local university receives a government subvention (i.e., financial support) for each citizen attending a particular graduate program, and a cost-benefit analysis is being performed for that program, then this subvention would represent a monetary benefit. If the program also enrolled non-nationals, then the tuition paid by those non-nationals would be added to the government subvention to get total monetary benefits. Finally, in the case of the private sector, benefits would represent revenue earned from sales and services. The perspective of the cost-inclusive evaluation and the type of organization are thus very important.

Why, though, should evaluations consider monetary outcomes? Providers, consumer advocates, and program developers provide a ready answer: to (hopefully) demonstrate that the program recovers some, if not all or more than all, of the resources consumed by it—for example, that it costs less, from a societal perspective, than initial funding suggests and that the program could even pay for itself and more. In this way, evaluators can help program managers answer questions of particular interest to many stakeholders, including "How much does the program really cost?" and "Is it worth it?"

That sort of evaluation finding can engage and maintain the attention of funders in a way that few other findings can. Including the benefits as well as the costs in program evaluation makes evaluation reports far less likely to be received with a "ho-hum" and a toss on the shelf (or into the "circular file," whether the virtual trash can on a funder's computer or the real trash can in a funder's office). For these reasons, cost-benefit analysis is at least as important a method of cost-inclusive evaluation as cost-effectiveness analysis. We cover both in subsequent chapters.
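The "pays for itself" question comes down to two standard cost-benefit indices, net benefit and the benefit/cost ratio. A minimal sketch with hypothetical figures (the function name is ours):

```python
def benefit_cost_summary(total_benefits: float, total_costs: float) -> dict:
    """Net benefit and benefit/cost ratio for a program."""
    return {
        "net_benefit": total_benefits - total_costs,
        "benefit_cost_ratio": total_benefits / total_costs,
    }

# Hypothetical program: $120,000 in monetized benefits (savings in health
# services plus increased participant earnings) against $80,000 in costs.
summary = benefit_cost_summary(120_000, 80_000)
# net_benefit = 40000.0; benefit_cost_ratio = 1.5, i.e., every dollar
# spent returns $1.50 in monetized benefits from this perspective.
```

A ratio above 1.0 (equivalently, a positive net benefit) is what lets an evaluator report that the program more than recovers the resources it consumes, from the stated perspective.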

SUMMARY

This chapter discussed various classification systems for costs and outcomes: fixed versus variable costs, direct versus indirect costs, capital versus recurrent or operational costs, opportunity costs, sunk costs, tangible versus intangible costs and outcomes, monetary versus nonmonetary costs and outcomes, and quantitative versus qualitative costs and outcomes. The chapter also elaborated on commonly used statistical techniques for reporting real costs and outcomes. Some of these techniques distinguished between outcomes that are intangible (i.e., program effectiveness) and outcomes that are tangible, that is, reported in monetary units.

Both costs and outcomes can be classified in several ways. For example, monthly wages would be considered both a direct cost and a fixed cost. Moreover, for accounting purposes, wages can be classified as a recurrent cost, an operational cost, or an administrative cost, depending on the accounting system used by the program. Understanding classification systems for costs and benefits helps prevent duplication ("double counting") of costs and benefits, which otherwise could lead to flawed cost analyses and poor management decisions.

The chapter also focused on how fixed and variable costs behave in response to different levels of program operations, such as increasing demand for services. This concept is particularly relevant for strategic planning and forecasting by both program administrators and program evaluators and is discussed in more detail in Chapter 7.

Depending on the nature of the analysis and/or the perspective adopted for the study, certain types of costs and benefits may or may not be included in cost analyses. For example, an evaluation of a community program staffed with volunteers would likely not include the opportunity costs associated with the volunteers' time, unless program replication was being contemplated. Nevertheless, it is generally recommended to still discuss these costs qualitatively in an evaluation report. Additionally, certain items may be considered a cost from one perspective but a benefit from another. For example, groceries donated to a food bank would be considered a benefit from the program perspective but a cost to the organization that donated the food. Moreover, if the donation was a one-time donation, then the program would need to convert the donated food into a monetary equivalent for budgeting purposes for the next financial year, as this cost would then have to be borne by the organization.

This chapter closes by noting that many cost-inclusive evaluations ignore nonmonetary costs and outcomes, doing great injustice to the cost-inclusive evaluation of programs, as this essentially underestimates the true costs, outcomes, and overall value of a program.


DISCUSSION QUESTIONS

(1) A Meals on Wheels program has monthly fixed costs of $20,000 and unit variable costs of $10 per meal. (a) Fill in the missing information in the table below. (b) Calculate the average meal cost for each activity level. (c) In groups of three, study the table and discuss what you have observed. Specifically, has the average meal cost increased, decreased, or remained the same? Engage in some discussion and figure out what is causing what you are observing.

                            1,000 meals    1,200 meals    1,300 meals
    Unit variable costs
    Unit fixed costs
    Total variable costs
    Total fixed costs
    Total costs
    Average meal cost

(2) Assume that you are a full-time student pursuing an evaluation degree. Your program duration is 1 year. What are the opportunity costs associated with your studies? If the program duration were 2 years instead, what would the opportunity costs be? Identify both tangible and intangible costs.

(3) Identify two indirect benefits that a 10-year-old could derive from a reading program. Identify two indirect costs that residents may incur from a cement factory located 4 miles from a residential area.

(4) Indirect costs and benefits can be very difficult to price. Still, it is recommended that they at least be discussed qualitatively. If you were conducting an evaluation of a reading program for 10-year-old children, do you think that reporting the indirect costs and benefits of the program could help with securing new funding? Why or why not?
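The computations behind question (1) can be sketched directly from the figures given there; the function name is ours:

```python
FIXED_COSTS = 20_000.0   # monthly fixed costs ($), from question (1)
UNIT_VARIABLE = 10.0     # variable cost per meal ($), from question (1)

def meal_cost_schedule(n_meals: int) -> dict:
    """Cost figures for one activity level of the Meals on Wheels example."""
    total_variable = UNIT_VARIABLE * n_meals
    total = FIXED_COSTS + total_variable
    return {
        "unit_variable": UNIT_VARIABLE,
        "unit_fixed": FIXED_COSTS / n_meals,
        "total_variable": total_variable,
        "total_fixed": FIXED_COSTS,
        "total": total,
        "average_meal_cost": total / n_meals,
    }

for meals in (1_000, 1_200, 1_300):
    row = meal_cost_schedule(meals)
    print(meals, round(row["average_meal_cost"], 2))
# Average meal cost falls as volume rises (30.0, 26.67, 25.38):
# fixed costs are spread over more meals while the unit variable
# cost stays constant at $10.
```

This is the behavior the discussion in groups is meant to surface: only the fixed-cost portion of the average changes with volume.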

CHAPTER 3

Tools for Identifying and Measuring Costs and Outcomes and Other Issues for Consideration

In Chapter 2, readers were introduced to different classification systems for costs and outcomes. Understanding these classification systems can be useful for different types of analyses. For example, distinguishing among fixed and variable costs, direct and indirect costs, opportunity costs, and sunk costs is fundamental to the cost and management accounting methodologies discussed in Chapter 7; the distinction between capital and recurrent costs is important for a proper understanding of Chapter 6; and the distinctions made throughout this book between tangible and intangible costs and outcomes, monetary and nonmonetary costs and outcomes, and opportunity costs are also important in cost analyses. Being able to sort costs in different ways for different purposes can provide new insights into the merit, worth, and value of a program, enriching cost-inclusive evaluations.

Still, having multiple classification systems at one's disposal can cause some confusion. If categories overlap, double counting or outright omission of some critical program resources may occur. At the same time, understanding that costs and outcomes can be classified under different categories may itself help prevent double counting and omissions, because knowledge provides wisdom, and wisdom leads to proactive action to avoid certain pitfalls. Under- or overestimation of either costs or outcomes can severely affect cost analyses and can have serious repercussions when decision making is based on inaccurate data and initiatives are accepted or rejected, continued or terminated, or expanded or duplicated because of misleading data. This chapter discusses tools that can help to prevent over- or underestimation of




costs and outcomes. The chapter also discusses other important issues that are pertinent to cost-­inclusive evaluations.

CHALLENGES WITH GATHERING COST DATA

The biggest challenge when gathering cost data will come from your client's resistance to buying into cost-inclusive evaluation. If your client does not sanction such an evaluation, it may be virtually impossible to do even a rudimentary cost analysis. As mentioned in Chapter 1, some program administrators believe that releasing cost data may put funding at risk because a program may not measure up. However, keeping cost data concealed may be considerably riskier for programs. Your job as an evaluator is to show your client why it is important to do a cost-inclusive evaluation and how such an evaluation can help secure even more funding. Accountability and transparency are critical, especially when money is at stake. One of the best ways to provide accountability and transparency is by analyzing an initiative's costs using one or more of the methodologies in this book. This can help program administrators better understand program costs and why it is necessary to analyze them. If there are strong concerns that program funding may be terminated if costs are evaluated, then it may be advisable to fast-forward to Chapter 7 to strategize how to offer a better-quality service at reduced cost and serve more participants. The key to ensuring that your program measures up to your competitors is to thoroughly understand your program's costs and how you can use this understanding and knowledge to make your operations more efficient.

Another issue that may be of concern is the financial cost of the cost-inclusive evaluation itself. Program administrators and evaluators may worry that the evaluation budget is not sufficient for a cost-inclusive evaluation—that a "cost analysis" would be "too costly." Although this may seem paradoxical, it makes plenty of sense. Would not a cost-inclusive evaluation that ignored its own costs be, by some accounts, hypocritical?
The time frame for a cost-inclusive evaluation also may cause concern. Admittedly, more data collection (for example, on costs as well as outcomes, and perhaps on monetary as well as nonmonetary outcomes) can raise the cost of the evaluation itself. However, with early strategizing, comprehensive data collection, including costs and possibly monetary outcomes, can be quite doable, because these data can be collected and analyzed at the same time as the usual data on program activities and nonmonetary outcomes. In addition, much cost data often can be extracted from budgets and (even better) accounting records.


The quality, credibility, and amount of available cost data can also be challenging in cost-inclusive evaluation. This may be of more concern in smaller programs, in which accounting functions are performed by nonaccountants or even volunteers. The quality of the available cost and benefit data can limit the types of cost-inclusive analyses that can be performed for an evaluation. These challenges are discussed later, in the section "Why Budgets and Accounting Records Are Often Not Enough."

To be completely transparent: based on our experiences, the first cost-inclusive evaluation of a program will likely present the most challenges, especially if some stakeholders are resistant to cost-inclusive evaluation. Future cost-inclusive evaluations of the same program should go considerably more smoothly, as stakeholders will have had opportunities to put mechanisms in place for routinely collecting cost and monetary outcome data. Also, after the first cost-inclusive evaluation, program administrators typically are excited to see the findings of the next evaluation.

DOUBLE COUNTING AND ITS IMPLICATIONS

Compared with monetary and nonmonetary outcomes, costs are much easier for many evaluators to identify and value. Nevertheless, identifying and valuing costs is rarely trivial for anyone and can prove a formidable, complicated task if one is not prepared with basic knowledge and skills. In addition to the common problem of incomplete data on costs, evaluators can easily encounter problems with double counting, or duplication, of some program costs. These two problems can have serious consequences, potentially continuing a modestly effective program whose costs have been underestimated or causing termination of a very effective program whose costs have, unfortunately, been overestimated.

Double counting of program costs is likely by novice evaluators and even by more experienced evaluators new to cost-inclusive evaluation. Duplication generally occurs when evaluators are not aware that costs, like outcomes, can be classified in different ways. Given that Chapter 2 discussed these classification systems, however, those pursuing cost-inclusive evaluations should be better prepared. To illustrate, some programs may classify salaries under administrative expenses, whereas other programs may classify the same expense under salaries. If a program has both classification systems on its books, and if accounting data were recorded by nonaccountants, data for salaries might be entered under both classifications! Or, if an evaluator is not aware that salary data could be entered using different classification systems, salaries for the program could be underestimated and administrative expenses overestimated by the exact same amount. For instance, if monthly salaries of $10,000 were mistakenly entered under administrative expenses instead of salaries in a particular month and this was not detected, then salaries would be understated by $10,000 for the year, and administrative expenses would be overstated by the same amount. If budget cuts become necessary, this misclassification could lead to cuts being made to the wrong resources. Furthermore, salaries would then be greatly underestimated in any attempt at program replication.

Costs and outcomes can also be duplicated under different guises to stakeholders. For example, the New Zealand Treasury (2005) explains that if a new railroad is built to link two towns, it would be incorrect to count the increase in home values, the decline in travel time, and better access to shopping as separate outcomes, as the latter two have already been capitalized into the increased home property values. To avoid this type of duplication problem, evaluators need good insight into the types of costs and outcomes that different initiatives could incur (Persaud, 2007). Reviewing literature on similar initiatives can usually provide good insight into the types of costs and outcomes that should be considered.

Problems may also occur when costs are recorded as one total and not split among the different programs or services offered (Persaud, 2007, 2018). For example, electricity expenses for four separate programs may be entered as one total if all four programs are housed in the same building. If a cost study is then required to determine the feasibility or cost-effectiveness of, say, Program A, the evaluator may encounter difficulties in trying to isolate the electricity costs for Program A. In such cases, it may be necessary to use an apportionment method to determine the amount of electricity that should be allocated to each program. If all programs used an equal amount of electricity, it could be as simple as apportioning 25% of the total electricity costs to each program. However, if one program used more electricity, equal apportionment would obviously not be suitable. Suppose all four programs were community programs aimed at curbing juvenile delinquency by encouraging youth to learn a skill, with Program A teaching carpentry, Program B teaching communication skills, Program C teaching dress etiquette, and Program D teaching reading. The apportionment of electricity to these four programs would obviously not be equal. In fact, it may be appropriate to apportion between 50 and 60% of the electricity to Program A, with the remaining electricity apportioned equally among Programs B, C, and D.

To summarize, cost-inclusive evaluators should be familiar with the different classification systems so that they can be proactive and vigilant in ensuring that duplication, or double counting, of either costs or outcomes does not occur.
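The apportionment just described is a simple weighted split of a shared cost. A minimal sketch, assuming a 55% electricity share for the carpentry program; the total bill and the weights are hypothetical:

```python
def apportion(total_cost: float, weights: dict) -> dict:
    """Split a shared cost among programs in proportion to usage weights."""
    total_weight = sum(weights.values())
    return {prog: total_cost * w / total_weight for prog, w in weights.items()}

# Hypothetical shared electricity bill of $12,000 for the four juvenile
# programs above: carpentry (A) assumed to use 55% of the electricity,
# with the remainder split equally among B, C, and D.
shares = apportion(12_000, {"A": 55, "B": 15, "C": 15, "D": 15})
# shares["A"] == 6600.0; shares["B"], shares["C"], shares["D"] == 1800.0
```

Using whole-number percentage weights keeps the arithmetic exact; any usage measure (square footage, machine-hours, metered readings) can serve as the weights if it better reflects actual consumption.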

COSTS IDENTIFICATION TOOLS

Cost studies need to be relatively precise, especially when costs and benefits are cumulative across participants of program services. Substantial over- or underestimation could result in flawed decision making, which could have serious consequences (Persaud, 2007, 2018). In cost-inclusive evaluations, the identification of all relevant costs is important. This task may appear at first glance rather tedious, complex, and time-consuming, especially if you are new to cost-inclusive evaluation. However, by the time you complete this book, we hope to have shown you that cost-inclusive evaluation is quite doable. Additionally, it is fundamental to good evaluations.

As previously mentioned, costs can be classified in many ways, which could possibly result in double counting or omissions. To prevent such problems, it may be helpful to use some type of cost estimation tool. One such tool is Scriven's (1991) conceptual cost model, first illustrated by Davidson (2005) and subsequently modified by the first author, Persaud (2007). This model identifies descriptive cost data on three dimensions: (1) type of cost, (2) costs to whom, and (3) costs when (see Figure 3.1).¹ The costs to whom dimension in Figure 3.1 uses terminology from Scriven's (2015) Key Evaluation Checklist, that is, his nomenclature for impactees: direct downstream impactees (immediate program participants or users), indirect downstream impactees (program participants' immediate family, peers, and friends who are impacted by the ripple effect), midstream impactees (program staff), and upstream impactees (all other stakeholder groups). The costs when dimension uses a four-phase traditional project life cycle. However, if you find that a different life cycle with more phases or different labels would be more suitable, and if you

¹ In Figure 3.1, all three dimensions must be considered simultaneously. For example, ask the questions: Were monetary costs for direct downstream impactees incurred in the preparation phase? In the implementation phase? In the operation phase? In the termination phase? Then go to the next costs to whom category, indirect downstream impactees, and repeat the same questions linked to the costs when phases. Complete monetary quantifiable, then move to nonmonetary quantifiable and nonmonetary qualitative and repeat the same process. Nonmonetary quantifiable is very important when program replication is being considered; nonmonetary qualitative is useful for enriching a cost-inclusive evaluation report narrative. Keep in mind that some dimensions may not be applicable in your cost-inclusive evaluation.

[FIGURE 3.1. Costs identification model. A three-dimensional grid crossing: type of cost (monetary quantifiable: actual expenditure for resources consumed; nonmonetary quantifiable: items capable of being measured in money, e.g., donated goods and services, very important when program replication is being considered; nonmonetary qualitative: can enrich the narrative in a cost evaluation and highlight the worth of resources that have no market value or are difficult or controversial to value) × costs to whom (direct downstream, indirect downstream, midstream, and upstream impactees) × costs when (preparation, implementation, operation, termination). Example entry: donated food from a supermarket for a feeding program, valued at $4,000 ($1,000 @ 4 weeks).]

prefer to list your stakeholder groups directly (e.g., program staff, participants), you can do so. The important point to keep in mind is that the classification system must make sense. Additionally, the classification should not be so ambiguous that costs could overlap, as this could result in double counting. It is also important to ensure that items are classified consistently. Three alternative formats, in addition to Figure 3.1, are presented for conceptualizing program costs (see Tables 3.1, 3.2, and 3.3), as different evaluators may find a particular format easier to understand and use.

TABLE 3.1. Alternative Format for Costs Identification: Computer School Lab Fee-Paying Program

    Narrative                                               Monetary       Nonmonetary
                                                            Quantifiable   Quantifiable
    Computer Hardware and Software (Itemize)
      Computers (Quantity × Price)
      Donated Printers (Quantity × Price)
      Scanners (Quantity × Price)
      Software (Quantity × Price)
      Computer Network Infrastructure (Quantity × Price)
    Furniture and Equipment (Itemize)
      Air Conditioning Units (Quantity × Price)
      Computer Desks (Quantity × Price)
      Computer Chairs (Quantity × Price)
      Whiteboards (Quantity × Price)
    Administrative Expenses (Itemize)
      Electricity
      Instructor Salaries (Number of Instructors × Salary)
      Miscellaneous (e.g., Whiteboard Markers)
      Training Manuals

Note. Keep in mind the perspective and purpose of the study. For instance, donated printers would be reflected as a cost to be priced only if replication is being contemplated. Otherwise, they could be discussed qualitatively in the evaluation report to enrich the narrative and highlight the value of these consumed resources.

TABLE 3.2. Simple Cost Analysis of Juvenile Rehabilitation Skills-Building Program in Carpentry

    Program Costs for Year 202X                               $           $
    Personnel (Itemize)
      Instructional Staff                                  50,654
      Administrative Staff                                 10,432
      Maintenance Staff                                     4,561      65,647
    Capital Assets (Itemize)
      Woodworking Equipment                                 6,633
      Other Equipment                                         240
      Furniture                                               632       7,505
    Overheads (Itemize)
      Telephone                                             2,000
      Water                                                   500
      Electricity                                           3,298
      Rental of Space                                      12,000      17,798
    Miscellaneous Expenses (Itemize)                          250         250
    Materials and Supplies for Participants (Itemize)
      Woodworking Materials                                 8,500
      Supplies (Nails, etc.)                                  694       9,194
    Total Costs                                                       100,394
    Number of Program Participants                                        950
    Cost per Participant ($105.68)                                        106

TABLE 3.3. Alternative Format for Costs Identification: Service-Related Activities

    Service-Related Activities (Participant Name)    Jan   Feb   Mar   Time Period n   Total
    Direct Services
      Activity 1
      Activity 2
      Activity 3
      Activity 4
      Activity 5
      Activity 6
      Activity n

When counting costs, exercise due care and diligence. It is important to consider many different types of costs: monetary costs; nonmonetary costs (e.g., expertise, volunteer time); opportunity costs; social capital costs (e.g., decline in workforce morale); costs that occur intentionally or unintentionally; costs that occur directly or indirectly; costs incurred by different stakeholders (e.g., program participants, society at large); and costs that occur at different stages of the project cycle. Costs should be itemized in as much detail as possible (Scriven, 1991, 2015). To avoid overlap of costs, determine at the outset which stakeholders' perspectives (see the section "Perspective for the Study" in Chapter 5) will be used to capture the types, amounts, and values of resources used by the program, that is, its costs (Persaud, 2007, 2018, 2020). These perspectives should match those being used to assess the nonmonetary outcomes of the program, that is, its effectiveness. For instance, if program replication is being contemplated, nonmonetary costs would need to be placed under nonmonetary quantifiable in Table 3.1 to accurately reflect program costs. However, if the evaluation is being conducted to satisfy funding requirements and for accountability, nonmonetary costs can be described qualitatively, such as hours of time volunteered by persons with a particular expertise.

Keep in mind that costs—the resources or "ingredients" that a program uses—should be specified as precisely as possible to ensure accuracy in valuation. For example, personnel resources and corresponding costs should be split into full versus part time and into relevant skill sets, often reflected in academic degrees, certifications, government personnel categories, or rank: for example, doctor, nurse, therapist, graduate student, clerical, administrative. The effort devoted to categorizing and valuing a given resource should be in proportion to that resource's overall contribution to total program costs (Persaud, 2007, 2018) and outcomes. In other words, invest more time in the cost categories that consume the largest share of the entity's budget and seem most likely to determine its outcomes. In many programs, it is personnel costs that require the most attention and differentiation for accurate costing and/or replication. Remuneration, for instance, is the largest budget category in most educational institutions (e.g., the University of Arizona). Evaluators should therefore devote more time to ensuring that salaries are accurate and that the monetary value of health care and other benefits is included, rather than spending substantial time trying to categorize and cost out office supplies in detail. "People—not paper clips" might be the motto here for focusing efforts in cost-inclusive evaluation. This practice also can maximize the accuracy of cost-inclusive evaluations. Office supplies generally represent less than 1% of program budgets: a 100% overestimation error in office supplies of $3,000 would be comparatively negligible, overstating total costs by only $3,000, but a 15% overestimation error in salaries and benefits totaling $1,000,000 would overstate total costs by $1,000,000 × 15% = $150,000.
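The arithmetic in Table 3.2 can be checked programmatically; a minimal sketch reproducing the table's published figures (variable names are ours):

```python
# Cost categories from Table 3.2 (juvenile carpentry program, year 202X)
costs = {
    "personnel": 50_654 + 10_432 + 4_561,           # 65,647
    "capital_assets": 6_633 + 240 + 632,            # 7,505
    "overheads": 2_000 + 500 + 3_298 + 12_000,      # 17,798
    "miscellaneous": 250,
    "materials_and_supplies": 8_500 + 694,          # 9,194
}
total = sum(costs.values())          # 100,394, matching the table's total
cost_per_participant = total / 950   # ~105.68, rounded to 106 in the table
```

Recomputing subtotals this way is a cheap guard against the transcription and double-counting errors discussed earlier in the chapter.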

OUTCOMES IDENTIFICATION TOOLS

Cost-inclusive evaluations should attempt to identify all program outcomes, where possible and feasible. However, as with costs, it is quite easy to duplicate or omit outcomes. In addition, just as monetary and nonmonetary resources can be distinguished, so can monetary and nonmonetary outcomes. Substance abuse treatment and other programs that reduce use of health or criminal justice services can be evaluated not only by counting the number of each type of service reduced but also in terms of the savings from health service costs avoided. Similarly, mental health programs that return participants to employment, increase the number of days worked, or increase the income earned can be evaluated not only by the additional days of work generated but also by the additional salary and benefits accrued.

Due care and diligence need to be exercised when identifying and valuing program outcomes, as decision making based on misleading data can have serious consequences, especially when over- or underestimations are large (Persaud, 2007, 2018). Figure 3.2 presents an outcomes identification model developed by the first author (Persaud), which can aid the identification of relevant monetary outcomes (i.e., benefits) and nonmonetary outcomes (i.e., effectiveness) for your cost-inclusive evaluation. The outcomes identification model is quite similar to the costs identification model shown in Figure 3.1 in that it uses the same three dimensions. However, the labels for the when dimension in Figure 3.2 reflect the timing of the outcomes rather than the program life cycle used in Figure 3.1. An alternative approach is to use a table format instead (see Tables 3.4 and 3.5); some evaluators may find this format easier to navigate. Regardless of the approach used, exercise care to ensure that all outcomes are properly captured for your cost-inclusive evaluation. If outcomes are too expensive or difficult to quantify, discuss them in the evaluation report using qualitative narrative instead.
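The monetization logic just described (services avoided times unit cost, extra days worked times daily earnings) can be sketched as follows; the function and all figures are hypothetical:

```python
def monetized_outcomes(er_visits_avoided: int, cost_per_er_visit: float,
                       extra_days_worked: int, daily_earnings: float) -> float:
    """Translate two nonmonetary outcomes into a single monetary benefit."""
    health_savings = er_visits_avoided * cost_per_er_visit
    added_income = extra_days_worked * daily_earnings
    return health_savings + added_income

# Hypothetical participant: 4 emergency visits avoided at $1,200 each,
# plus 20 additional days worked at $150 per day.
benefit = monetized_outcomes(4, 1_200.0, 20, 150.0)
# benefit == 7800.0 ($4,800 in health savings + $3,000 in added income)
```

Summed across participants, such figures become the total monetary benefits used in the cost-benefit indices discussed in Chapter 2.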

Tools for Identifying and Measuring Costs and Outcomes 

 57

Figure 3.2 examples of outcomes:

Effectiveness. RURAL PRIMARY SCHOOL READING PROGRAM. Direct: increased reading ability in children who attend the program. Indirect: children who attend the program teach their siblings to read.

Benefits. CAPITAL ASSET SALE/DISPOSAL (direct monetary benefit): salvage value from sale of capital assets. Note that the asset may be worthless at the end of its life, and costs may be incurred for its disposal, in which case this would be reflected as a direct monetary cost instead. BEHAVIORAL EFFECTS: effects that cause positive behavior changes in consumers; generally difficult to quantify, but can be discussed qualitatively. REVENUE OR INCOME: all sources of revenue or income are monetary benefits. GOVERNMENT SUBVENTIONS: payment of university tuition fees for residents. COST AVOIDANCE OR COST SAVINGS (indirect benefit): improving systems or processes often incurs significant direct costs; however, the benefits from cost avoidance or savings should at least be discussed qualitatively. DONATED GOODS/SERVICES: tangible nonmonetary benefits. CONFIDENCE (intangible benefit): REGULAR CHIROPRACTIC CARE can yield better sleep, fewer drugs and hospital visits, a stronger immune system, and less pain.


FIGURE 3.2. Outcomes identification model. The model (depicted as a three-dimensional cube labeled "Outcomes Identification Model") classifies outcomes along three dimensions:

• OUTCOMES WHEN: Immediate (now); Short-Term (<1 year); Medium-Term (1–3 years); Long-Term (>3 years).

• OUTCOMES TO WHOM: Direct Downstream Impactees; Indirect Downstream Impactees; Midstream Impactees; Upstream Impactees.

• TYPE OF OUTCOME: Monetary Quantifiable (actual revenue or income, e.g., cash donations, participants' fees, funds saved from reduced health care needs); Nonmonetary Quantifiable (outputs capable of being measured in numbers, e.g., number of lives saved, number of program participants who successfully completed treatment or training); Nonmonetary Qualitative (useful when it may be too difficult or expensive to quantify some types of effectiveness data, e.g., spiritual well-being of restoring lands to Native Americans).

The figure also notes an intangible effectiveness example from the reading program: increased student motivation and confidence from learning to read.

TABLE 3.4. Alternative Format for Outcomes Identification

For each type of outcome, the table lists outcomes for each impactee group (Outcomes to Whom: Direct Downstream Impactees, Indirect Downstream Impactees, Midstream Impactees, Upstream Impactees) across each time period (Outcomes When: Immediate, Short-Term, Medium-Term, Long-Term), with a narrative for itemization:

MONETARY QUANTIFIABLE: Provide detailed itemization in money for each impactee group, linking to each Outcomes When criterion (i.e., follow a process similar to that explained in footnote 1 on p. 51).

NONMONETARY QUANTIFIABLE: Items capable of being measured in numbers (i.e., effectiveness). Examine the effort and cost to do this versus just reporting qualitatively.

NONMONETARY QUALITATIVE: This can enrich cost-inclusive evaluations. Useful when it is too difficult or costly to quantify certain types of outcomes.

Note. Outcomes are program specific. Your specific program may not have outcomes for all impactees in each of the three dimensions.
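The three dimensions used in Figure 3.2 and Table 3.4 can also be captured in a small data structure when outcomes are catalogued electronically. The sketch below is a hypothetical illustration: the category labels follow the table, but the `Outcome` class and the example entries are our own invention, not part of the model itself.

```python
# Hypothetical sketch: recording outcomes along the model's three dimensions
# (type of outcome, outcomes to whom, outcomes when). Category labels follow
# Table 3.4; the example outcomes are invented for illustration.

from dataclasses import dataclass

TYPES = ("Monetary Quantifiable", "Nonmonetary Quantifiable", "Nonmonetary Qualitative")
IMPACTEES = ("Direct Downstream", "Indirect Downstream", "Midstream", "Upstream")
TIMING = ("Immediate", "Short-Term", "Medium-Term", "Long-Term")

@dataclass
class Outcome:
    description: str
    outcome_type: str   # one of TYPES
    to_whom: str        # one of IMPACTEES
    when: str           # one of TIMING

    def __post_init__(self):
        # Guard against categories outside the model's three dimensions.
        assert self.outcome_type in TYPES
        assert self.to_whom in IMPACTEES
        assert self.when in TIMING

outcomes = [
    Outcome("Increased reading ability in children who attend the program",
            "Nonmonetary Quantifiable", "Direct Downstream", "Short-Term"),
    Outcome("Siblings taught to read by program children",
            "Nonmonetary Quantifiable", "Indirect Downstream", "Medium-Term"),
]

# Group by type, mirroring the rows of Table 3.4:
by_type = {t: [o.description for o in outcomes if o.outcome_type == t] for t in TYPES}
print(by_type["Nonmonetary Quantifiable"])
```

A structure like this makes it straightforward to check that no impactee group or time period has been overlooked before the evaluation proceeds.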




TABLE 3.5. Alternative Format for Outcomes Identification: Computer School Lab Fee-Paying Program

Narrative                                                  Classification
Benefits (Money)
  Classes: Community Children (Participants × Fee)         Monetary Quantifiable
  Classes: Community Adults (Participants × Fee)           Monetary Quantifiable
  Salvage Value: Sale of Capital Assets                    Monetary Quantifiable
Effectiveness (Numbers)
  Number of Children Who Completed Program                 Nonmonetary Quantifiable
  Number of Adults Who Completed Program                   Nonmonetary Quantifiable
Effectiveness (Qualitative Description Only)
  Increased Confidence from Being Computer Literate        Nonmonetary Qualitative

Note. Keep in mind the perspective and purpose of the study. If the computer lab is to be used exclusively by the children who attend the school, then revenue generation from outside classes would not be applicable, and effectiveness data would reflect only the schoolchildren served. If instead computer classes are offered to outside participants, the revenue earned would reflect the time period for which the program is being evaluated. Thus, if the program's life is 5 years and the program is evaluated at the end of that period, the revenue earned would be for the 5-year period; if the program is evaluated at the end of Year 1, the revenue would reflect only revenue earned in Year 1. Likewise, resources consumed would reflect only Year 1 costs. Also, if the program is evaluated in Year 1, salvage value of a capital asset would not be applicable, as assets are salvaged at the end of their useful lives. If actual cash inflows are received, several of the economic appraisal methods discussed in Chapter 4 could be used to evaluate this program. However, if no cash inflows are received, only a simple cost-effectiveness analysis would be possible. For instance, if the program is evaluated at the end of the 5-year period and only children used the program, divide total monetary costs in Table 3.1 by the total number of students who used the lab.
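The simple cost-effectiveness computation described at the end of the note is a single division. A minimal sketch, with invented cost and participant figures:

```python
# Hypothetical sketch of the simple cost-effectiveness analysis described in
# the note to Table 3.5: divide total monetary costs by the number of
# schoolchildren who used the lab. The figures are illustrative only.

total_costs = 250_000.0      # total monetary costs over the 5-year program life
children_served = 500        # schoolchildren who used the lab

cost_per_child = total_costs / children_served
print(f"Cost-effectiveness ratio: ${cost_per_child:,.2f} per child served")  # $500.00
```

The ratio only becomes meaningful when compared against the same ratio for a critical competitor serving a similar population.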

MACRO-, MESO-, AND MICRO-LEVEL PROGRAM OPERATION AND EVALUATION

As defined by Wholey, Hatry, and Newcomer (2010), "a program is a set of resources and activities directed toward one or more common goals" (p. 5). Different programs, however, operate at different levels of specificity. Individual therapy with one person could be considered a "program," for example, but a rather specific, micro-level program; most, if not all, activities of therapy focus on one individual, the participant. Other programs are far more macro in that they are regional, national, or international in scope, in the focus and conduct of activities, and in the outcomes desired. Examples of macro-level programs include most efforts to reduce air pollution, improve water quality, or control an epidemic.

Between micro programs and macro programs lie most of the programs we evaluate. At this in-between, or meso, level, there might be initiatives that target an important set of health problems, such as cardiovascular disease, for persons in a particular community. At macro, meso, and micro levels, there are substance abuse prevention programs, education efforts, and criminal justice programs for disadvantaged youth.

The concept of a continuum from more to less specific program activities can be expanded to help plan, solicit, and describe findings of evaluations, especially when evaluating costs as well as outcomes. For example, costs and outcomes of providing social services to children and families can be measured for a citywide program (most likely a meso-level cost assessment), for each individual child in the city (certainly a micro-level cost assessment), or for an entire country (a macro-level one). Some evaluators may attempt very simple cost measurement by dividing total costs by the number of participants served. The same can be done for some program-level outcomes, such as the number or percentage of students graduated or patients "cured." These macro-level-only approaches to evaluation minimize opportunities for more sophisticated statistical analyses, as explained in later chapters, and for more formative, improvement-oriented evaluation (Scriven, 1967). Evaluating costs and outcomes not just at the program level (total costs, total outcomes) but also at individual and group levels allows understanding of the variability in costs and in outcomes between individuals.
Evaluating costs and outcomes at group as well as individual levels allows costs and outcomes of serving different types of participants (older compared with young, Black compared with White, women compared with men, for example) to be measured separately. This can help answer questions about possible differences in costs, as well as possible differences in effectiveness and benefits, for different groupings of participants. Finding differences in costs and outcomes for different types of participants can generate concern and heated discussion about inequities, but it also can help resolve those inequities. How otherwise can a program reduce or eliminate differences in outcomes or costs if they have not been evaluated, that is, measured before, during, and after efforts to resolve those inequities?

Also, although a program can operate on a national or global level, its evaluation can be conducted at more specific levels. Meso- or even micro-level evaluation of a macro-level program is possible, for example, and can be preferable. Treating or draining ponds in a specific forest, for instance, is a micro-level intervention for eliminating Guinea worm transmission, along with provision of safe drinking water and education about symptoms and treatment (see Carter Center, n.d.). This program can be thought of as being implemented with macro goals (e.g., improved national health and productivity, freeing up of health care resources) and macro activities (e.g., funding of a collaborating center, implementation in all countries infested by the nematode). Meso activities are funded for specific regions of different countries, and the success of those activities would be evaluated for different regions. Changes in behavior and occurrence of infection despite program activities would occur at the micro level of individual water sources and individual human hosts. Evaluation of outcomes and of costs will likely occur at multiple, and perhaps all, levels, given that resources are used for eradication program activities at each level and that outcomes occur at multiple levels as well.
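The group-level comparisons described above can be sketched as a few summary statistics. The records below are invented for illustration; a real evaluation would draw per-participant costs and outcome measures from program data.

```python
# Hypothetical sketch: moving beyond a single program-level average to
# per-group cost and outcome summaries. All data are invented.

from statistics import mean

# (group, cost of serving the participant, outcome score) per participant
records = [
    ("women", 1200.0, 0.80), ("women", 900.0, 0.75), ("women", 1100.0, 0.90),
    ("men",   1500.0, 0.60), ("men",   1300.0, 0.70),
]

program_cost_per_participant = mean(cost for _, cost, _ in records)

groups = sorted({g for g, _, _ in records})
for g in groups:
    costs = [c for grp, c, _ in records if grp == g]
    scores = [s for grp, _, s in records if grp == g]
    print(f"{g}: mean cost ${mean(costs):,.2f}, mean outcome {mean(scores):.2f}")

print(f"program-level: mean cost ${program_cost_per_participant:,.2f}")
```

A single program-level mean would hide exactly the between-group variability that can reveal, and help resolve, inequities in costs and outcomes.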

WHY BUDGETS AND ACCOUNTING RECORDS ARE OFTEN NOT ENOUGH

Budgets and accounting records summarize monetary units of measurement (i.e., inflows and outflows of money). They thus provide a good starting point for extracting costs, and possibly monetary outcomes (benefits), for some types of after-the-fact (ex post) cost studies (Persaud, 2007, 2018, 2020). However, budgets and accounting records may not capture information on many types of resources consumed by a program, for example, volunteered time and donated resources such as free or public facilities (Persaud, 2021). Understanding the types and amounts of such resources is important when program replication is being contemplated. Accounting records also generally will not provide information about revenue generated by participants if that is not the focus of the program. For example, if a mental health service provided participants with assertiveness training that empowered them to reinvoice customers for work completed but never paid for, that outcome would not be captured in accounting records. Budgets and accounting records also do not record sunk costs, opportunity costs, and other intangible costs. Important intangible outcomes of programs, such as increased confidence and enhanced self-esteem, likewise go unrecorded.

In cost-inclusive evaluations, data for estimated costs and outcomes need to be relevant to the program under consideration. Thus it would be inappropriate to assume that costs and monetary outcomes from a particular program would be the same for another program unless the programs are identical in all respects. Assuming otherwise can result in serious over- or undercounting of costs and benefits (Mohr, 1995; New Zealand Treasury, 2005).

Finally, the validity of budgets and accounting records should not be taken for granted. Both can contain errors of transposition or misclassification, missing or incomplete data, inconsistencies, and lump sums that make it difficult to assign costs to specific activities of a program (Persaud, in press). For example, if electricity is recorded in the accounting records as $100,000 for Year 202X and five separate programs are being administered, it would likely be helpful to the evaluation to understand how much electricity was or would be consumed by each program.
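The electricity example suggests a simple allocation rule: apportion the lump sum in proportion to each program's usage. The sketch below assumes metered kilowatt-hours per program, which are invented figures; only the $100,000 total comes from the text.

```python
# Hypothetical sketch: allocating a lump-sum utility cost across programs
# in proportion to usage. The $100,000 total comes from the text; the
# per-program kilowatt-hour shares are invented for illustration.

electricity_total = 100_000.0
usage_kwh = {"Program A": 50_000, "Program B": 25_000, "Program C": 12_500,
             "Program D": 7_500, "Program E": 5_000}

total_kwh = sum(usage_kwh.values())
allocation = {p: electricity_total * kwh / total_kwh for p, kwh in usage_kwh.items()}

for program, share in allocation.items():
    print(f"{program}: ${share:,.2f}")

# Sanity check: the allocation should exhaust the lump sum exactly.
assert abs(sum(allocation.values()) - electricity_total) < 1e-6
```

When no usage meter exists, a proxy driver (square footage, staff hours) can stand in for kilowatt-hours, a choice that should be documented among the evaluation's assumptions.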

ETHICS AND COST-INCLUSIVE EVALUATION

In conducting evaluations, evaluators are likely to encounter many ethical and moral challenges and dilemmas (Persaud & Dagher, 2020). These challenges are inevitable (Buchanan & MacDonald, 2012; Morris, 2015; Persaud, 2021; Royce, Thyer, Padgett, & Logan, 2001) and will likely be considerably exacerbated in cost-inclusive evaluation. We have seen those who care deeply about collecting, analyzing, and reporting data on outcomes of programs pivot all too pragmatically when costs of programs are evaluated. We also have experienced notable prejudice against reporting program net benefits (benefits minus costs) that were negative, that is, when costs exceeded benefits. Irrespective of the type of evaluation, evaluators should anticipate and plan to deal with all types of challenges, so that their work can be seen as credible (Fitzpatrick, Sanders, & Worthen, 2004; House, 1993).

According to Morris (2005), the domain of ethics is concerned with "issues of moral duty and obligation, involving actions that are subject to being judged as good or bad, right or wrong" (p. 131). Yet ethics can be rather nebulous (Kant, 2018), and standards can be interpreted quite differently by different individuals. In fact, moral issues can become obscured by economic, professional, and social pressures. Rules of ethics that should govern "behavior and attitudes based on the doctrine of prima facie equal rights" (Scriven, 1991, p. 134) may often be conveniently ignored.

In cost-inclusive evaluations, the determination of "right" and "wrong" can be more complex than expressing a strong belief in conducting ethical evaluations, regardless of type. Our experience shows us that cost-inclusive evaluations require considerable judgment and sensitivity when using complex, sometimes ambiguous, and often incomplete data. Ethics in cost-inclusive evaluation goes far beyond ensuring due diligence and care in collecting data on costs and outcomes. It encompasses not succumbing to political pressures to juggle figures in the interest of the program, consumer advocate, or funder. It requires using sound professional judgment to determine which costs and outcomes to include and how to value them (Persaud, 2007). For instance, concerns have been raised about the fairness of valuing disenfranchised groups equally alongside other groups. Questions have been raised about the ethics of trying to assign a value to a human life. An even more contentious discussion arises when life is valued using demographics based on age, economic status, or gender.

As alluded to, the evaluator's responsibility for providing accurate and comprehensive data is important to facilitate decision making that is rational and sound. Still, ethical controversies can occur even when the data are accurate. For instance, although a cost-effectiveness analysis can aid choices among competing programs with similar objectives, it is a considerably more complex task to make decisions when competing programs have widely disparate objectives. In the latter case, concerns with equity and fairness may often surface, especially when all competing needs are equally worthwhile and the constituencies they benefit are equally important (Linfield & Posavac, 2019; Pinkerton, Masotti-Johnson, Derse, & Layde, 2002). In other instances, criticism may be leveled against cost-inclusive evaluation because some stakeholders fear it. This fear may be expressed as beliefs that allowing costs, monetary outcomes, or net worth to figure into an evaluation of human service programs is unethical, impractical, or immoral.
For instance, some clinicians involved in mental health care "resist the idea that clinical decisions should be guided by economic considerations instead of the needs of the patient" (Berghmans, Berg, van den Burg, & ter Meulen, 2004, p. 146). However, another position is that it may be unsound, as well as unethical, to ignore the just distribution of scarce resources, as the pursuit of one initiative over another carries an opportunity cost (Williams, 1992). Moreover, many alternatives can be equally effective at much less cost, so economic considerations must be taken into account. Additionally, it could be argued that if taking costs into consideration were unethical, we would never be challenged or motivated to find better and cheaper ways to do things. Indeed, we have argued that by funding programs that require fewer resources per participant served effectively, by "delivering the best to the most for the least" (Yates, 1996, p. 2), a greater number of consumers can participate in programs and experience positive outcomes.

Cost-inclusive evaluators have a moral obligation to do their work well, in compliance with high ethical standards, to ensure that scarce resources are optimized for societal good, just as all evaluators do. Yet the consequences of doing our job well can involve challenging the status quo and political dynamics that could lead to retribution (Smith, 2002). Still, it is crucial to recognize that ignoring costs or exaggerating outcomes can do great harm and contribute significantly to injustice. Cost-inclusive evaluators, therefore, have important roles to play in ensuring justice and equity for all.

COMMON TRAPS AND PITFALLS

If you are reading this book, it is because you are motivated to learn more about cost-inclusive evaluation. To get a good overview of the many issues you need to consider, you need to read the book in its entirety. Nevertheless, this section highlights a few issues to keep in mind to ensure that your cost-inclusive evaluation is credible and useful, as an evaluation that is neither will not be used. Evaluations require considerable time and effort, which carries a cost. Therefore, our goal must always be to produce an evaluation report that is helpful and useful for informed decision making.

• Many clients may be uneasy when you suggest a cost-inclusive evaluation. Make sufficient time to properly articulate why this type of evaluation is better, and use examples to illustrate how cost-inclusive evaluation can help your client.

• Involving a variety of stakeholders in an evaluation from the beginning can be even more essential for cost-inclusive evaluations than for other evaluations. Providers, participants, advocates, managers, funders, and regulators all care deeply about the resources they devote to a program. These and other stakeholders often have unique and important information about the resources needed for and used by programs to "make it work," as well as the value of those resources. Just as neglecting any stakeholder group when evaluating program activities or outcomes can be a serious mistake in any evaluation, ignoring their perspectives on the types, amounts, and monetary values of resources used by the program could invalidate or sabotage a cost-inclusive evaluation (see Chapter 5; cf. Yates, 2012).

• Keep in mind your time frame and evaluation budget, as both have implications for the type of study that is realistically feasible and practicable.

• Cash inflows and outflows must be discounted to take account of the time value of money when economic appraisal methods are being used (see the section "Time Preference and Discounting" in Chapter 4). This is particularly critical when initiatives have a life of more than a year. Not discounting your cash flows can considerably distort their value.

• The discount rate selected is important, as discount rates can have a huge impact on your cash flows (see the section "Discount Rate Choices and Their Impact on Analyses" in Chapter 5). This rate must be carefully chosen, and a justification must be provided for its choice.

• Ensure that you understand the pros and cons of the various economic appraisal methodologies that can be used in a cost-inclusive evaluation (see the section "Advantages and Disadvantages of Different Economic Appraisal Methods" in Chapter 4). Also ensure that the methodology selected is suited to the needs of decision making.

• Be open to the use of methodologies from cost and management accounting (see Chapter 7).

• Ascertain the type and quality of data that are available, as this will determine the methodology that can be used, as well as the sophistication of the analyses.

• Review the literature on similar programs to see what types of costs and outcomes were included in the analysis. You may need to make comparisons to other studies in your report.

• Avoid methodologies that are "trendy" but not sound. Ask yourself whether a methodology is correct for your particular cost-inclusive evaluation.

• Always try to use both quantitative and qualitative analyses in cost-inclusive evaluation. Qualitative analyses are often ignored but can really enrich the discussion.

• A cost-inclusive evaluation should ideally consider critical competing programs and alternative interventions that could achieve the same or similar results. Scriven (2015) recommends looking at a more expensive option, a less expensive option, and an option with similar costs.

• Always document the assumptions made in an evaluation, so that readers can understand and independently replicate your computations. Also note that if comparisons are being made with critical competitors, the assumptions used must be similar to those of your critical competitors.

• Avoid controversial valuations that can discredit your report. For example, valuations placed on life can be exceedingly controversial. It may be better to use a methodology such as cost-effectiveness analysis to avoid this problem.

• Keep in mind when you are considering replication that cash inflows and outflows may need to be adjusted for inflation.

SUMMARY

Overestimating or underestimating costs or outcomes can produce flawed cost analyses, leading to incorrect guidance for administrative actions. Double counting or complete omission of either costs or outcomes can similarly produce inaccurate evaluation findings. If costs are underestimated, a program that is only modestly effective could continue to be funded. Worse yet, if costs are overestimated, funding for a beneficial program could be terminated.

Chapter 3 also highlights the importance of understanding the many classification systems (discussed in Chapter 2) that can be used for monetary costs and benefits. Understanding nuances of different cost and benefit classifications can prevent over- or underestimation, double counting, and complete omission of critical costs and benefits. Two tools can identify and measure costs and outcomes to avoid the aforementioned problems: (1) the costs identification model and (2) the outcomes identification model. Both models classify costs and outcomes on three dimensions: (1) type (monetary quantifiable, nonmonetary quantifiable, nonmonetary qualitative), (2) whom (the various program stakeholders), and (3) when (time period). Variants of these models were also presented in a table format for easy understanding and navigation. The importance of considering nonmonetary quantifiable and nonmonetary qualitative costs and outcomes of programs is emphasized, as such discussions enrich evaluation reports and address interests of many persons served or otherwise affected by the program.

This chapter also highlights challenges with collecting cost data, including client resistance. The chapter explains why budgets and accounting records often are insufficient to tell the true story of program resources, activities, processes, and outcomes. Different levels of specificity (macro, meso, micro) in costs and outcomes often need to be considered for comprehensive cost-inclusive evaluations.
Ethical dilemmas that can occur when conducting cost-­inclusive evaluations are explored in Chapter 3 as well. Good judgment and sensitivity to cultural concerns are requisites for effective cost-­inclusive evaluation. The chapter ends by exploring common traps in cost-­inclusive evaluation, providing advice on how to avoid these pitfalls.


DISCUSSION QUESTIONS

(1) Over- and underestimation of costs and outcomes can lead to flawed cost analyses. To understand this concept, answer the following questions:
    (a) Assume that your correct program costs are $100,000 and participants equal 100. Calculate the cost per participant.
    (b) Assume that you have overestimated program costs by $20,000. Calculate the cost per participant.
    (c) Assume that you have underestimated program costs by $10,000. Calculate the cost per participant.
    (d) Discuss how over- and underestimations can affect decision making.

(2) You have embarked on a community initiative in which you will be giving out protective COVID-19 equipment to residents of your neighborhood. Working in pairs, identify the various types of costs that would be incurred in this initiative. Now compare your responses. Discuss as a class whether your costs are realistic. Present your information using Table 3.1 as a guide.

(3) This chapter highlighted that costs and outcomes can be classified as nonmonetary and quantifiable and as nonmonetary and qualitative. Think of a program and identify two costs and two outcomes that may be classified as nonmonetary quantifiable and nonmonetary qualitative. Discuss the value added that can be derived from presenting this information as part of your cost-inclusive evaluation report.

(4) In groups of four, discuss four ethical issues that could arise with cost-inclusive evaluation. How would you address these concerns so that your cost-inclusive evaluation is not criticized?

PART II

Adapting Economic Methods to Enhance Cost‑Inclusive Evaluation

CHAPTER 4

Economic Appraisal Methodologies

This chapter introduces several economic evaluation methods that can aid good program management, as well as good program evaluation, to help programs do more with less. Economic appraisal can encompass a wide array of techniques. The main types are described and illustrated with examples in this chapter. Each type can provide unique insights for decision making, and all methods are suited to cost-inclusive evaluation if the data needed for a particular methodology are available. Economic evaluations can range in sophistication from highly complex (e.g., a full-fledged cost-benefit analysis conducted from a societal perspective) to simpler analyses used for internal decision making (Persaud, in press). Economic appraisal methods have been used for decades in the private sector to choose among investment options that would maximize profitability and wealth. These techniques have started to gather momentum in the public and nonprofit sectors of many developed countries as a key performance indicator for demonstrating value for money in expenditure and for accountability and transparency.

EVOLUTION AND DEVELOPMENT OF COST ANALYSIS

Documentation of principles governing cost-inclusive evaluation methods can be traced to 1772, when Benjamin Franklin postulated in a letter to Joseph Priestley his decision process for determining the most beneficial course of action. Essentially, Franklin opined that it was important to quantify the pros and cons of a decision by placing weights on each pro and con (Gramlich, 1981) using a rational and justifiable perspective deemed valuable to the decision maker. This technique is recognizable as cost-benefit analysis, one of the earliest forms of cost-inclusive evaluation.


More formal documented origins of cost-­inclusive evaluation methods are articulated in French economist Jules Dupuit’s 1844 paper “On the Measurement of the Utility of Public Works” (Sassone & Schaffer, 1978). By 1902, the U.S. Army Corps of Engineers began using cost-­inclusive evaluation methods, particularly cost-­benefit analysis, to evaluate federal expenditures on navigation (Zerbe, 2018). In 1936, forms of cost-­inclusive evaluation became formal practice in the U.S. government with the passage of the U.S. Flood Control Act, which “laid down the test that a project is feasible if the benefits to whomsoever they accrue are in excess of the estimated costs” (Chawla, 1990, p. 7). What we now recognize as cost-­ inclusive evaluation became more prevalent and quantitatively sophisticated from the 1950s to 1980s. Notable initiatives during this period included the U.S. Federal Inter-­Agency River Basin Committee on Water Resources report (Gilpin, 1995), the Green Book (see H. M. Treasury, 2018), the use of cost-­benefit analysis for military programs by the U.S. Department of Defense in the 1960s, and the 1969 National Environmental Policy Act mandating cost-­benefit analysis for regulatory programs (Persaud, 2018). In 1981, cost-­benefit analysis gathered additional momentum with the signing of Executive Order 12291 (National Archives, 2020; see Appendix 4.1), making cost-­benefit analysis an official appraisal tool of the U.S. government. In addition to the government of the United States, governments of other developed countries have also endorsed the use of cost-inclusive evaluation. For instance, Australia recommends cost-­benefit analysis for assessment of regulatory proposals (Department of the Prime Minister and Cabinet, 2016). The United Kingdom has guidelines that govern the Central Government’s policy on appraisal and evaluation (H.M. Treasury, 2018). 
The European Union has legislated guidelines that are binding on its member countries (European Commission, 2014). Canada requires cost-­benefit analysis in regulatory decisions as well (Treasury Board of Canada Secretariat, 2007). Furthermore, large funding institutions have developed their own guidelines and templates to promote cost-­benefit analysis internally and externally. For instance, the United Nations Sustainable Development Group (2016) has developed a free cost-­benefit analysis template for feasibility studies.

TRADITIONAL ECONOMIC FRAMEWORKS

Traditional economic frameworks require that costs (i.e., expenditures by a program) and benefits (i.e., income or other monetary outcomes of program activities) be valued in monetary terms. Traditional economic frameworks also require that costs and monetary outcomes account for the time value of money by adjusting future cash flows to today's value (Pan American Health Organization, n.d.). In determining the suitability of an economic framework for a particular cost study, evaluators need to consider the nature of the evaluand (i.e., what is being evaluated), the type of information needed for decision making, the quality of the data that are available, the time frame and budget for an evaluation, and the robustness of the assumptions that will be used (Persaud, in press).

Two techniques are generally employed for valuation of costs and outcomes. Objective techniques are based on "technical and/or physical relationships that can be measured" (World Bank, 1996, p. 42), whereas subjective techniques are based on "behavioral or revealed relationships" (World Bank, 1996, p. 42). Valuation can be computed in several ways. Where market prices are available, using them is the preferred approach. When market prices do not exist, other approaches, such as shadow pricing (i.e., a technique used to derive a price for an unpriced good or service), must be employed to impute the value of the good or service.

This chapter introduces evaluators to several different ways to analyze cost information. Before proceeding, however, it is important to briefly discuss time preference and discounting, as both provide the foundation for economic appraisal methodologies based on discounting principles.

TIME PREFERENCE AND DISCOUNTING

When programs have a lifespan of more than a year, it is important to adjust costs and benefits for the reductions in their values that occur when they are delayed. This is in addition to adjustments for inflation or deflation. Put simply, a dollar today is worth more than a dollar in the future. As such, a rational individual would defer payment of $100,000 to the future rather than pay it now, and would prefer to receive $100,000 today rather than 10 years from now. Adjusting the values of costs and benefits for time is done through discounting (see Figure 4.1). Discounting uses a discount rate r to convert expenditures and monetary outcomes into values that can be used in present-day decisions. The discount rate represents the minimum return on an investment that we are willing to accept. Common discount rates (rs) include prime or treasury rates in the country in which a program operates. Discount rate choices can profoundly affect economic analyses (Persaud, 2007, 2011) and change the recommendations of a cost-inclusive evaluation (see Chapter 5, Figure 5.2). As such, evaluators are strongly advised to review the literature on similar programs or projects before selecting a discount rate, to avoid under- or overestimating cash flows when they are converted to today’s value. The wise evaluator also includes a justification of the discount rate chosen. Some evaluators examine the effects of different discount rates on the findings and recommendations of cost-inclusive evaluations.

Adapting Economic Methods

A program is being considered for helping high school dropouts to acquire marketable job skills. The program is expected to train 3,000 participants annually. The only cost to program participants is a one-time registration fee of $100. The initial start-up capital costs for the program are $100,000. The program has a life of 5 years and must be self-sustaining. Using the ingredients method, the estimated revenue and operating costs are projected to be as follows:

            Revenue      Operating Costs
  Year 1    $300,000     $210,000
  Year 2    $300,000     $220,000
  Year 3    $300,000     $240,000
  Year 4    $300,000     $250,000
  Year 5    $300,000     $250,000

Employing a discount rate of 5%, the present value (PV) of these cash flows is shown next. The illustration shows how the data would be set up in an Excel spreadsheet.

  A       B             C             D              E            F
          r = 5%(a)
  Year    Discount      Revenue       PV of          Costs        PV of
          Factor                      Revenue                     Costs
  (1)     (2)           (3)           (4)=(2)×(3)    (5)          (6)=(2)×(5)
  0       1                                          100,000(b)   100,000
  1       0.9524        300,000(c)    285,720        210,000      200,004
  2       0.9070        300,000       272,100        220,000      199,540
  3       0.8638        300,000       259,140        240,000      207,312
  4       0.8227        300,000       246,810        250,000      205,675
  5       0.7835        300,000       235,050        250,000      195,875
  Total                 $1,500,000    $1,298,820     $1,270,000   $1,108,406

(a) To find the discount factors shown in Column B, go to Appendix 4.2 at the end of this chapter and locate the column with 5%. Next, look under the Period column and pick up the discount factors for Years 1 to 5. For example, Year 1 at 5% will show 0.9524, Year 2 at 5% will show 0.9070, and so on.
(b) Initial investment or start-up costs are never discounted because they occur now, so they are set to Time 0. In this example, investment costs are merged with operational costs, since the only investment cost occurred at Time 0 and required no discounting. However, when investment costs occur throughout the program life, it is recommended that they be shown in a separate column and similarly discounted, since it is important to keep capital costs separate from operational costs.
(c) Present value computations assume that all cash inflows and outflows take place at year end.

  FIGURE 4.1    Discounting and the present value of money.
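The discounting arithmetic in Figure 4.1 is easy to reproduce in code. The sketch below (illustrative Python; the variable names are our own) computes present values for the revenue and cost streams above. It uses full-precision discount factors rather than the 4-decimal factors tabled in Figure 4.1, so totals differ from the figure by a few dollars.

```python
# Present-value calculation for the Figure 4.1 example:
# 5-year program, 5% discount rate, $100,000 investment at Time 0.
r = 0.05
revenue = [300_000] * 5                       # Years 1-5
costs = [210_000, 220_000, 240_000, 250_000, 250_000]

def present_value(cash_flows, rate):
    """Discount year-end cash flows (Year 1 first) to today's value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

pv_revenue = present_value(revenue, r)        # ≈ $1,298,843
pv_costs = 100_000 + present_value(costs, r)  # Time 0 investment is not discounted
print(f"PV of revenue: {pv_revenue:,.0f}")
print(f"PV of costs:   {pv_costs:,.0f}")
```

The small discrepancies from Figure 4.1 ($1,298,820 and $1,108,406) illustrate the rounding point made later in the chapter about manual versus spreadsheet calculations.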


As explained in Figure 4.1, discounting calculations use these values: the discount rate, the estimated costs and benefits of the program, and the estimated duration of the program. The calculations quantify the preference for some, if not most, costs to be delayed during the operation of a program or a participant's participation in it, and for benefits to occur earlier. Pragmatically, however, the reverse often prevails: expenditures typically must be made up front to establish a program and recruit participants, while monetary outcomes, such as increased participant employment income and savings in other services no longer needed by participants, often arrive only after substantial, often multiyear, delays.

NET PRESENT VALUE

Net present value (NPV) is the most widely used cost-analytical methodology in industry. This methodology is based on the principle of discounting illustrated in Figure 4.1 and determines the economic feasibility of a proposed investment by subtracting discounted costs from discounted benefits (see Figure 4.2; note that, for ease of reference, all formulas used in this chapter are summarized at the end of the chapter). NPV is common and critical for evaluation of human services that posit benefits delayed by decades, such as primary education or parent–child training, with costs that are quite immediate. Prior to the 1970s, many government entities ignored NPV and instead relied exclusively on cost-benefit analysis. Today, however, NPV is “one element of assessment that should be included in virtually all projects” (New Zealand Treasury, 2005, p. 29) and is a standard criterion used in public sector projects to determine economic feasibility (Office of Management and Budget, 1992).

To calculate the net present value (NPV), discount all cash flows to present time (see Figure 4.1). Then substitute your discounted totals into the following formula:

  NPV = ΣPV Benefits – [ΣPV Investment + ΣPV Costs]
      = $1,298,820 – ($100,000 + $1,008,406)
      = $1,298,820 – $1,108,406
      = $190,414

Decision Criterion: Accept if NPV ≥ 0 (i.e., the NPV must be positive).

Interpretation of the NPV: This project is economically feasible. It will earn $190,414.

Note. The symbol sigma (Σ) means “sum of.” With reference to Figure 4.1, we need to sum the discounted cash flows for the project duration, which in this example is 5 years. You can always replace Σ with the word Total. However, if you are looking for additional examples on the web or in textbooks, you will see the sigma symbol. As noted in Figure 4.1, investment costs were added to operational costs. We, however, recommend that investment costs (i.e., capital costs) be kept separate from operational costs (i.e., recurring costs). When budgets are being cut, it is operational costs, not capital costs, that are cut. If the two are merged, informed decisions cannot be facilitated. For example, a 10% budget cut with the investment costs included will result in $110,841 being cut from the budget, when only $100,841 should be cut. This represents a $10,000 difference. This difference can become very pronounced when costs are in the millions.

  FIGURE 4.2    Net present value.

COST-BENEFIT ANALYSIS

Cost-benefit analysis combines costs (again, the monetary value of resources consumed by the program) with monetary outcomes (i.e., benefits) produced by the program, such as revenue earned, participant fees, increased earnings for participants following program participation, or decreased use by participants of health or criminal justice services following program participation. It is sometimes called “benefit-cost analysis” and compares programs that may have disparate objectives. This type of analysis can be simple or highly sophisticated (see the section “Cost-Benefit and Benefit/Cost Ratios” in Chapter 9 to learn how simple analyses can be converted into more sophisticated analyses). In its more elaborate form, costs and benefits are considered from the perspectives of multiple stakeholders (Persaud, 2018). Specifically, the more sophisticated social cost-benefit analysis seeks to “identify a broader range of outcomes than just those associated with program objectives . . . it examines the relationship between the investment in a program and the extent of positive and negative impacts on the program’s environment” (Stufflebeam & Shinkfield, 2007, p. 179). Accordingly, social cost-benefit analysis includes direct and indirect costs and tangible as well as intangible benefits, including costs and benefits that are internal as well as external to the program, and that are long term as well as short term.

Excel provides formulas for computing present value, net present value, and internal rate of return, which can considerably simplify the process. However, evaluators may wish to use other methodologies, such as cost-benefit analysis or the discounted payback period. They will therefore need to properly understand the concept of discounting so that they can perform these computations.

Sophisticated cost-benefit analysis is a complex exercise beyond the scope of this book. Our focus here is on simpler forms of cost-benefit analysis, which examine costs and benefits from a program perspective. Like NPV, which adjusts benefits and costs for timing before subtracting costs from benefits, the benefit/cost ratio is also calculated using discounted cash flows. In fact, the formula for the benefit/cost ratio is identical to the formula for NPV, with the only difference being that the minus sign in the NPV formula is replaced with a division sign. Thus, if the NPV is already computed, it will take less than a minute to compute the benefit/cost ratio, and vice versa. Building on the information in Figure 4.1, Figure 4.3 shows the formula for the calculation of the benefit/cost ratio.

To calculate the benefit/cost ratio (BCR), discount all cash flows to present time (see Figure 4.1). Then substitute your discounted totals into the following formula:

  BCR = ΣPV Benefits ÷ (ΣPV Investment + ΣPV Costs)
      = $1,298,820 ÷ ($100,000 + $1,008,406)
      = $1,298,820 ÷ $1,108,406
      = 1.17

Decision Criterion: Accept if BCR ≥ 1.

Interpretation of the BCR: A BCR of 1.17 indicates that for every dollar spent on the program, $1.17 is returned.

  FIGURE 4.3    Benefit/cost ratio.
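As the text notes, the BCR reuses the NPV inputs, with division in place of subtraction. A minimal sketch (illustrative Python, using the discounted totals from Figure 4.1):

```python
# Benefit/cost ratio alongside NPV, from the same discounted totals.
pv_benefits = 1_298_820      # ΣPV of revenue, Figure 4.1
pv_investment = 100_000      # Time 0 start-up cost
pv_costs = 1_008_406         # ΣPV of operating costs, Figure 4.1

bcr = pv_benefits / (pv_investment + pv_costs)   # division -> ratio
npv = pv_benefits - (pv_investment + pv_costs)   # subtraction -> NPV
print(f"BCR = {bcr:.2f}, NPV = {npv:,}")  # BCR = 1.17, NPV = 190,414
```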

INTERNAL RATE OF RETURN

The internal rate of return (IRR) is an economic metric that is used when decision makers wish to estimate the profitability of a proposed initiative. It is the discount rate that makes the present value of a project’s benefits equal to the present value of its costs. The IRR can be found manually (see Figure 4.4) or with spreadsheet software such as Excel. If Excel is being used, subtract the undiscounted costs from the undiscounted revenue by year, as the IRR formula in Excel is programmed to select only one range. For example, with reference to Figure 4.1, Year 1 would become $300,000 – $210,000 = $90,000, Year 2 = $80,000, Year 3 = $60,000, Year 4 = $50,000, and Year 5 = $50,000. To find the IRR manually, the NPV is recalculated using an arbitrary discount rate that is either higher or lower than the required rate of return r. Note that calculations using the Excel built-in formulas will give slightly different numbers compared with manual calculations, as the manual calculations use four decimal places, whereas the Excel formulas use eight decimal places.

• The internal rate of return (IRR) is the r that makes the NPV = 0 (i.e., discounted benefits equal discounted costs).
• From Figure 4.2, a discount rate of 5% gives an NPV of $190,414, which is very far from zero.
• When the NPV is greater than zero, a higher r value will take the NPV closer to zero.
• Since the NPV in Figure 4.2 is far from zero, use trial and error and choose an arbitrary r that is much larger than 5% (e.g., 60%). Discount the cash flows in Figure 4.1 using this rate and insert your revised totals into the NPV formula in Figure 4.2. It will give an NPV of $14,544. Perform several iterations, since you want to get the NPV closer to zero.
  • First iteration with arbitrary r = 60% gives an NPV of $14,544
  • Second iteration with arbitrary r = 70% gives an NPV of $2,333
  • Third iteration with arbitrary r = 72% gives an NPV of $191
  • Fourth iteration with arbitrary r = 73% gives an NPV of ($861)
• After a few iterations, you can use an IRR linear interpolation formula to get the actual IRR. However, note that when you are using an interpolation formula, the range between the r values in your two NPVs must be small, or the IRR will be inaccurate. Try to keep the r values within 1% of each other (see below).

IRR Linear Interpolation Formula
  = Lower Rate + [(Higher Rate – Lower Rate) × (NPV at Lower Rate ÷ (NPV at Lower Rate – NPV at Higher Rate))]
  = 72 + [(73 – 72) × (191 ÷ (191 – (–861)))]
  = 72 + (1 × .18)
  = 72.18%

Decision Criterion: Accept if the IRR is ≥ the required rate of return.

Interpretation of the IRR: This program returns an IRR that far exceeds the required rate of return of 5%. It is financially viable.

To compute with Excel:
• Select Formulas, Financial, IRR.
• In the Values box, enter the cell reference for the range of the net cash flows (e.g., B1:B6), then select OK. Note that the initial cash flow must be shown as a negative number.
• The formula will return a value of 72.18%.

  FIGURE 4.4    Internal rate of return.
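The trial-and-error search in Figure 4.4 is easy to automate. The sketch below (illustrative Python, not the book's prescribed method) brackets the IRR and bisects on the discount rate until the NPV is essentially zero:

```python
# IRR by bisection for the net cash flows derived from Figure 4.1:
# Time 0 = -100,000; Years 1-5 = 90,000, 80,000, 60,000, 50,000, 50,000.
flows = [-100_000, 90_000, 80_000, 60_000, 50_000, 50_000]

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=2.0, tol=1e-7):
    """Bisect on the discount rate until the NPV crosses zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the rate must rise
        else:
            hi = mid
    return (lo + hi) / 2

print(f"IRR ≈ {irr(flows):.2%}")  # close to the 72.18% found in Figure 4.4
```

Full-precision discounting lands within a few hundredths of a percent of the 72.18% obtained by interpolation with 4-decimal factors, consistent with the rounding note above.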

PAYBACK PERIOD AND DISCOUNTED PAYBACK PERIOD

The payback period (PP) is the number of years required to recover the original investment (see Figure 4.5). In general, it is preferable to recover costs as quickly as possible, because the money can be used to finance another initiative. Note that the PP is suitable only as a rough screening mechanism, as it does not account for the time value of money. Considering this serious deficiency, the discounted payback period (DPP) is a more appropriate approach for economic analysis of a program (see Figure 4.5).

Payback Period (PP): Equal Annual Net Cash Flows (NCFs)

Program A costs $100,000 and has annual NCFs of $20,000.

  PP = Investment Cost ÷ Annual NCFs = $100,000 ÷ $20,000 = 5 years

Discounted Payback Period (DPP): Unequal Annual Net Cash Flows (Data from Figure 4.1)

  Year                 0            1           2          3          4          5
  NCFs (PV Revenue
  – PV Costs)          ($100,000)   $85,716     $72,560    $51,828    $41,135    $39,175
  Cumulative NCFs      ($100,000)   ($14,284)   $58,276    $110,104   $151,239   $190,414

  DPP = Year before Full Recovery + (Unrecovered Cost at Start of Year of Recovery ÷ NCF during Year of Recovery)
      = 1 + ($14,284 ÷ $72,560)
      = 1.20 years

Decision Criterion: Shorter repayment periods are better, since less risk is involved. Note that most organizations set their own benchmarks depending on the nature of their business. For example, an organization involved in forestry would require a much longer time for the return on investment than an organization involved in tourism.

Interpretation of the DPP: This program will take 1 year and approximately 2½ months to recover the initial investment.

  FIGURE 4.5    Payback period and discounted payback period.
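The DPP calculation in Figure 4.5 can be sketched as follows (illustrative Python; the cash flows are the discounted NCFs from Figure 4.1):

```python
# Discounted payback period: count years until cumulative discounted
# net cash flows turn positive, then interpolate within that year.
from itertools import accumulate

discounted_ncfs = [-100_000, 85_716, 72_560, 51_828, 41_135, 39_175]

cumulative = list(accumulate(discounted_ncfs))
year = next(t for t, total in enumerate(cumulative) if total >= 0)
unrecovered = -cumulative[year - 1]      # shortfall at start of the recovery year
dpp = (year - 1) + unrecovered / discounted_ncfs[year]
print(f"DPP = {dpp:.2f} years")  # 1.20 years, as in Figure 4.5
```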


COST-EFFECTIVENESS ANALYSIS

Cost-effectiveness analysis is a useful methodology for examining the costs of programs that are designed to achieve similar or the same outcomes (Persaud, 2007, 2009a). The notion underlying cost-effectiveness analysis is simple: programs should be selected based on their ability to achieve results in the most parsimonious manner (Levin et al., 2018). This methodology is particularly useful when program outcomes are complicated to assess in financial terms, when the effort of converting outcomes into a monetary unit of measurement is unduly great, or when excessive controversy would likely arise with the valuation of certain types of outcomes (e.g., reduced mortality, lives saved). In cost-effectiveness analysis, costs (i.e., the monetary values of resources consumed by alternative programs) are contrasted with nonmonetary measures of the outcomes produced by the programs. This methodology is the typical measure used in human services. The cumulative monthly cost per depression-free day might, for example, be calculated for each individual participant in alternative programs and compared statistically1 (e.g., Sava et al., 2009). In practice, however, cost-effectiveness analysis may be done for a single program if no alternative program is available (e.g., cost per depression-free day for a given treatment).

Cost-effectiveness analysis can be computed in several ways. When outcomes are identical, the minimum cost approach is useful, and cost-effectiveness analysis reduces to a simple comparison of discounted costs. Thus, if three options are being considered and the total discounted costs are $100,000 for Option 1, $90,000 for Option 2, and $105,000 for Option 3, Option 2 would be selected, as it has the lowest cost. The subtly different cost-per-participant approach (CPPA) (see Figure 4.6) is appropriate when alternative programs have different outcomes, such as different rates of success for participants. Basically, the CPPA divides total program costs for all participants by the number of participants who exceeded the threshold for “success,” such as removal of a pathogen for patients or graduation for college students.

1 As you develop skills and confidence in performing cost-inclusive evaluation, you can fast-forward to Chapter 9 and review the sections “Complexities and Individual Variability in Indices of Cost-Effectiveness” and “Cost-Effectiveness and Cost/Effectiveness (and Effectiveness/Cost) Ratios” to garner greater insight into how to perform more sophisticated types of cost-effectiveness analyses and why this may be important.

Refer to Figure 4.1. Recall that the program runs for 5 years and that 3,000 participants will be trained annually. Assume that there is no registration fee for participants. This means that no revenue will be derived. In this scenario, a simple cost-per-participant approach (CPPA) is all that is possible. We still need to discount our costs.

  CPPA = (ΣPV Investment + ΣPV Costs) ÷ Number of Participants
       = ($100,000 + $1,008,406) ÷ (3,000 × 5 years)
       = $73.89

Decision Criterion: The CPPA must be positive. If several programs are being compared, choose the one with the lowest cost.

Interpretation of the CPPA: The cost per participant is $73.89.

  FIGURE 4.6    Cost-per-­participant approach.
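The Figure 4.6 calculation, as a one-line sketch (illustrative Python):

```python
# Cost per participant: total discounted costs over all participants served.
pv_investment = 100_000
pv_costs = 1_008_406
participants = 3_000 * 5          # 3,000 trainees per year for 5 years

cppa = (pv_investment + pv_costs) / participants
print(f"CPPA = ${cppa:.2f}")  # $73.89 per participant
```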

COST-FEASIBILITY ANALYSIS

Cost-feasibility analysis is used during the planning phase. It acts as a screening mechanism to ascertain the feasibility of projects that could be conducted with the available budget. If costs exceed the available budget, it is pointless to conduct further analysis (Levin & McEwan, 2001; Persaud, 2020). Cost-feasibility analysis is a rather limited type of analysis, as it cannot determine which option is best. It can only identify the alternatives that should be considered.

COST-UTILITY ANALYSIS

Traditional Method

Cost-utility analysis is useful when programs have multiple outcomes that are difficult to assess comprehensively in financial terms. There are several variants, including the traditional method (e.g., Hollands, Pan, & Escueta, 2019; Levin & McEwan, 2001) of assessing alternatives by comparing costs and utility (as perceived by users) to examine which option yields the greatest utility for a given cost (see Table 4.1).

TABLE 4.1.  Cost-Utility Analysis: Traditional Approach

  Program                                                       A          B
  Estimated cost                                                $20,000    $27,000
  Probability of increasing mathematics by a grade-level
    equivalent                                                  .9         .8
  Utility of raising mathematics by a grade-level equivalent    8          8
  Probability of increasing English by a grade-level
    equivalent                                                  .9         .8
  Utility of raising English by a grade-level equivalent        9          9
  Expected utility, Program A: (.9 × 8) + (.9 × 9)              15.3
  Expected utility, Program B: (.8 × 8) + (.8 × 9)                         13.6
  Cost-utility ratio = Cost ÷ Expected Utility                  $1,307     $1,985

Decision Criterion: Choose the program with the lowest cost-utility ratio.

Interpretation of the Cost-Utility Ratio: Program A provides the lowest cost per utility unit.

Quality-Adjusted Life Years

A common measure of utility in cost-inclusive evaluation of health and human service programs is change in quality-adjusted life years (QALY; Drummond, Sculpher, Claxton, Stoddart, & Torrance, 2015). This approach combines costs with common, widely applicable nonmonetary outcome measures to allow quite disparate programs to be compared in cost-inclusive evaluations, such as treatments for depression versus treatments for anxiety, or prevention campaigns for cancer versus treatments for heart disease. The number of years of life added by the programs, according to research, is adjusted for quality so that extending a life beset by great pain is valued less than extending life the same or even fewer years with notably less or no pain. Quality-adjusted life years gained (QALYG) are measured based on participant or provider reports or translated from nonmonetary measures of more specific outcomes, resulting in cost per QALYG, often averaged over individual participants (e.g., Freed, Rohan, & Yates, 2007; Sava et al., 2009). QALYG can be measured using several quickly completed survey instruments that assess the physical health (e.g., the SF-6D, derived from the SF-12 or -36; cf. Glick, Doshi, Sonnad, & Polsky, 2015) or mental health of individuals (e.g., Freed et al., 2007). QALYG attributable to a program usually are assessed as change following the program, often including a follow-up period of a year or more for problems that typically return at least in part following treatment, such as substance abuse. If any changes in a randomly selected waiting list for treatment are available for comparison, QALYG can be better assessed by statistical analyses that adjust pre–post gains in QALY for the treatment group for similar, or opposite, changes occurring in the waiting list. Net QALYG for the treatment program then are contrasted with the cost of the program in a graph or as a ratio of program cost per QALYG.
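The Table 4.1 arithmetic can be sketched as follows (illustrative Python; the data structure is our own):

```python
# Cost-utility analysis, traditional method (Table 4.1):
# expected utility = sum of probability × utility across outcomes,
# then cost-utility ratio = cost ÷ expected utility.
programs = {
    "A": {"cost": 20_000, "outcomes": [(0.9, 8), (0.9, 9)]},  # (probability, utility)
    "B": {"cost": 27_000, "outcomes": [(0.8, 8), (0.8, 9)]},
}

results = {}
for name, p in programs.items():
    eu = sum(prob * util for prob, util in p["outcomes"])
    results[name] = {"expected_utility": eu, "ratio": p["cost"] / eu}
    print(f"Program {name}: expected utility {eu:.1f}, "
          f"cost-utility ratio ${results[name]['ratio']:,.0f}")
# Program A: 15.3 and $1,307; Program B: 13.6 and $1,985. Choose A (lowest ratio).
```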


Natural variability between participants in actual costs and QALYG, as well as uncertainty in estimated costs and estimated QALYG, can be compared with statistical tests if sample size is sufficient and individual measures of costs and QALYG have been obtained. Simpler, graphic comparisons are possible by first finding the range of QALYG expected from a program, then the range of costs expected for that program, and finally combining QALYG and costs in graphs of QALYG versus costs, as detailed using modified box-and-whisker diagrams (Hubert & Vandervieren, 2008) in Figure 4.7. Note that confidence intervals can be shown for costs as well as for QALYG. In contrast to Drummond et al. (2015), who graph costs on the vertical axis of graphs and outcomes on the horizontal axis, we consider outcomes to be a function of resources consumed (which are valued as costs), thus graphing costs on the horizontal axis and outcomes = f(costs) on the vertical axis (see Figure 4.7), as long practiced in program evaluation (e.g., Siegert & Yates, 1980; Yates, 1978, 1980a). Statistical analyses of cost/QALYG ratios for each participant allow sophisticated comparisons, potentially adjusting for preprogram differences in participants’ characteristics that might moderate program effectiveness. If cost and QALYG data are available as estimates only or are only available at the program level, reasoned comparisons still are possible but require careful validation with actual data on both costs and actual changes in QALY. Decision rules for program funding also can be represented on QALYG × program cost graphs. The common threshold of program costs being below $50,000 per QALYG for the program to continue funding is represented in Figure 4.7 by the dot on the graph at 1 QALYG and $50,000, indicating that programs with at least 1.0 QALYG and costs closer to zero would be funded.

[Figure 4.7 plots median QALYG (vertical axis, 0.0 to 2.5) against median cost per year per program participant (horizontal axis, $0 to $5,000, with a break to $50,000), marking the median outcome/median cost point, 95% confidence intervals, an area of uncertainty, and the fundability threshold of $50,000 per 1 QALYG.]

  FIGURE 4.7    Description of variability or uncertainty in QALYG and program costs.

Similar graphs can incorporate box-and-whisker plots and confidence intervals to represent uncertainty areas, allowing direct visual evaluations of the cost-utility (or cost-effectiveness or cost-benefit) of different programs. For instance, Figure 4.8 illustrates that, relative to the current Index Program A (termed “treatment as usual” [TAU]), alternative Program E has better cost per participant and better outcome for participants than Program A, even after taking into account uncertainties and variability of costs and outcomes for the programs. Given the ranges of probable costs and outcomes for each program, shown by the shaded rectangles in Figure 4.8, Program E is the obvious choice for funding over Program A. Program D might be thought to have had even better outcomes than Program E, but the regions of likely outcomes actually overlap considerably for Programs D and E. If budgetary, religious, or political constraints

[Figure 4.8 plots each program’s region of most likely cost per participant (horizontal axis, better or worse than the index program) and outcome in QALYG (vertical axis, better or worse than the index program) relative to Index Program A: Program E is better on both cost and outcome; Program D has better outcomes; Program B has better (lower) cost but worse outcomes; and Program C is worse on both.]

  FIGURE 4.8    Multiprogram comparison of cost per QALYG. Note: Regions of most likely costs and outcomes for each program, based on actual observations with natural variability between participants, are shown by the shaded rectangles of each program.

at a particular site prevent Program E from being implemented, for example, Program D would be preferable to Index Program A because Program D has clearly better outcomes than A, and at similar cost. Note that the shaded regions of Programs A and D overlap on the cost axis, however. Other possibilities include a program that is better in one respect than the index program but worse in another. Relative to Index Program A, a program could have clearly better (lower) cost but also clearly worse outcomes. Program B appears to be such a program, a better option if costs had to be reduced, and if the decrease in outcome were acceptable to decision makers. Also relative to Index Program A, a program could have clearly better outcomes but also clearly worse cost. Program D would be such a program if its cost region shifted more to the right of Figure 4.8 following additional data collection. Both possibilities may need to be considered by decision makers and can be informed by cost-inclusive evaluation and additional analyses. If, for instance, costs had to be reduced relative to those of the index program, even with worse outcomes, or if outcomes had to be improved even with worse costs, cost-effectiveness acceptability curves (CEACs) could be generated to help with these challenging decisions. As detailed in Drummond et al. (2015), CEACs could help funders decide whether they were willing to sacrifice outcomes for the lower costs predicted or pay more for the better outcomes expected, according to estimates or actual observed costs and outcomes. CEACs can be generated from cost and outcome data with reasonably accessible computer programs (e.g., Glick et al., 2015; Oxford Population Health, n.d.; TreeAge Pro, 2021). Finally, Figure 4.8 shows that Program C is inferior to Index Program A in both outcomes and costs, with no overlap in outcome or cost ranges.

Disability-Adjusted Life Years

In contrast to QALY, disability-adjusted life years (DALYs) are years of life that would have been lost if a program had not been in operation. One speaks of years of life lost (YLL) that were avoided or averted by participation in a program, so DALY would be more accurately expressed as DALYA. A year lived with disability or illness is counted as closer to 1.00 year lost as the disability or illness increases. DALYs are common in large-scale evaluations of health initiatives. Analyses of cost per DALYA proceed similarly to analyses of cost per QALYG, including the graphic analyses provided in Figures 4.7 and 4.8. An example of the incorporation of DALYA in a comprehensive cost-inclusive evaluation of cash and food voucher assistance programs for reducing childhood death and stunting is provided by Trenouth et al. (2018).


RETURN ON INVESTMENT

Return on investment (ROI) is a metric used to evaluate financial performance. There are many variants of the ROI formula. The typical ratio for social programs relates the monetary net benefits of the program to the monetary resources used by the program (see Figure 4.9).

Using the information in Figure 4.1, the return on investment (ROI) is computed as follows:

  ROI = (Σ Undiscounted Net Program Benefits ÷ Σ Undiscounted Costs) × 100
      = [($1,500,000 – $1,270,000) ÷ $1,270,000] × 100
      = 18%

Decision Criterion: Choose the project with the highest ROI.

Interpretation of the ROI: This program will generate an 18% return on investment.

  FIGURE 4.9    Return on investment.
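Figure 4.9 in code form (illustrative Python, using the undiscounted totals from Figure 4.1):

```python
# ROI: undiscounted net benefits as a percentage of undiscounted costs.
total_benefits = 1_500_000   # undiscounted revenue, Years 1-5 (Figure 4.1)
total_costs = 1_270_000      # undiscounted operating costs plus $100,000 investment

roi = (total_benefits - total_costs) / total_costs * 100
print(f"ROI = {roi:.0f}%")  # 18%
```

Note that, unlike NPV and the BCR, this ROI variant ignores the time value of money, which is one reason the chapter treats it as a distinct metric rather than a substitute for discounted analyses.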

SURROGATE MARKET VALUATION METHODOLOGIES

The positive outcomes, as well as the costs, that environmental resources provide to society are enormous and should be accounted for in social cost-benefit analyses. Unfortunately, many environmental and social costs and benefits do not yet have established market prices. Economists have developed a few alternative approaches to impute valuation when market prices are not available for nonmarketed goods. These techniques are based on revealed preference techniques, such as hedonic pricing and the travel cost approach, and stated preference techniques, such as contingent valuation.

Hedonic Pricing

This surrogate market valuation methodology “uses the different characteristics of a traded good to estimate the value of a nontraded good” (New Zealand Treasury, 2005, p. 22). It is frequently used in the housing market to value environmental quality variables and determine property values. For instance, noise pollution and air quality have a direct impact on property values. Thus, real estate market prices can be used to estimate individuals’ willingness to pay for environmental quality. Specifically, the hedonic price is the differential price paid for a superior environment (location) compared with an undesirable one.


This methodology is also being increasingly used for the valuation of other types of goods and services. For instance, it has been proposed for the valuation of intangible assets (Cohen, 2009) “to inform health care resource allocations in a more systematic manner” (Basu & Sullivan, 2017, p. 265) and is also being used in cost-­benefit studies for the assessment of major public sector transportation capital projects in developed countries (Faulin, Grasman, Juan, & Hirsch, 2018).

Travel Cost Approach

This methodology is particularly useful for assessing the economic value of recreational or natural goods and services. It analyzes actual behavior in a surrogate market by using the estimated costs incurred by people to travel to some public amenity (e.g., recreational parks, fishing resorts, nature trails) as a proxy indicator of society’s willingness to pay for use of the amenity. Visitors to the amenity are surveyed and questioned on issues such as distance traveled and time taken to get to the amenity, trip expenses, number of trips, site characteristics, substitute site(s), and other socioeconomic variables (Dixon, Scura, Carpenter, & Sherman, 1994). Using regression analysis, this indirect approach then estimates “the relationship between visitation rates and travel costs incurred to and from the site” (Fuguitt & Wilcox, 1999, p. 299) and constructs a travel cost demand function for the amenity. All other factors being constant, higher costs to get to the amenity generally translate into less use of the amenity. The travel cost approach is particularly useful for assessing land use alternatives, new site developments, user fees for recreational amenities, and natural resource damage assessment.

Contingent Valuation

The contingent valuation approach establishes the monetary value for an environmental amenity by soliciting responses from individuals on the amount that they would be willing to pay for the particular good or service or, alternatively, asking how much compensation they would be willing to accept in lieu of losing the option to purchase the good or service. Willingness to pay and willingness to accept can be estimated using several statistical techniques (Fuguitt & Wilcox, 1999). These techniques use scenarios or hypothetical market prices to value improvement or decline in environmental quality; examples include convergent direct questioning, the moneyless choice method, trade-off and bidding games, and the priority method (Gilpin, 1995). The technique basically elicits responses from persons via surveys or other indirect means. It should be noted, however, that although this methodology is quite insightful, it needs to be interpreted with caution, as it is subject to biases, as well as reliability and validity issues (Gilpin, 1995; Venkatachalam, 2004).
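A minimal sketch of how open-ended contingent valuation responses might be summarized. The willingness-to-pay bids and population size below are hypothetical:

```python
# Sketch of a contingent valuation summary: hypothetical survey responses
# stating willingness to pay (WTP) for an environmental improvement, in $.
import statistics

wtp_bids = [0, 5, 10, 10, 15, 20, 20, 25, 40, 75]

mean_wtp = statistics.mean(wtp_bids)
median_wtp = statistics.median(wtp_bids)  # less sensitive to a few very large bids

print(mean_wtp, median_wtp)

# Aggregate value = mean WTP * size of the affected population,
# if the sample can defensibly be generalized to that population.
population = 50_000  # hypothetical
print(mean_wtp * population)
```

The median is often reported alongside the mean because a handful of extreme bids can dominate the mean, one of the reliability concerns noted above.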

ADVANTAGES AND DISADVANTAGES OF DIFFERENT ECONOMIC APPRAISAL METHODS

When there are multiple ways to do something, it is important to understand the various methodologies and their respective pros and cons so that you can objectively choose a method suited to the decision being contemplated. Cost-inclusive evaluation is similar to research methodology, in which the method is selected based on the types of research questions. Different cost-analytical methodologies provide very different kinds of cost information. Selecting the right methodology is therefore important and can contribute to the overall success and quality of your cost-inclusive report. Table 4.2 summarizes the pros and cons of the various economic appraisal methods discussed in this chapter so that those engaged in cost-inclusive evaluation can be better informed.

SUMMARY

This chapter explains and provides examples of the primary methods of cost analysis, presenting unique insights for decision making by program evaluators and administrators. Cost-analytic methods range from simple cost-per-program-participant calculations to sophisticated analyses such as cost-effectiveness, cost-benefit, and cost-utility analyses. For context, the chapter begins with a brief overview of the evolution of cost analysis. Time preference and discounting are covered next, as these are central to cost analysis. Taking into account the time(s) at which costs are incurred and the later time(s) at which monetary outcomes (benefits) are received is crucial in multiyear evaluations, because a dollar spent or received today has more worth than a dollar spent or received in future years. Methodologies illustrated and discussed in this chapter include:

1. NPV, which examines the economic feasibility of a proposed investment over time and the dollar value gained from the investment;

2. cost-benefit analysis, which indicates the money earned or lost per unit of expenditure;


TABLE 4.2.  Pros and Cons of the Economic Appraisal Methods Discussed in This Chapter

Net Present Value
  Advantages:
  • Considers time value of money.
  • Useful for comparing different types of investments.
  • Indicator of wealth maximization.
  • Considers all cash flows.
  • Provides clear and unambiguous signals to decision makers.
  Disadvantages:
  • Not suitable when costs and benefits cannot be assessed in pecuniary terms.
  • Requires selection of a suitable discount rate.
  • Ignores nonmonetary factors.

Cost-Benefit Analysis
  Advantages:
  • Considers time value of money.
  • Useful for comparing different types of investments.
  • Useful for judging absolute investment worth.
  • Helps in ensuring maximum value for dollar is obtained when resources are constrained.
  Disadvantages:
  • Not suitable when costs and benefits cannot be assessed in pecuniary terms.
  • Requires selection of a suitable discount rate.
  • Ignores wealth maximization.
  • Results can vary based on assumptions used.
  • Social cost-benefit analysis is exceedingly complex, complicated, and technical.

Internal Rate of Return
  Advantages:
  • Considers time value of money.
  • Useful for comparing different types of investments.
  • Intuitively appealing to decision makers because it is easy to relate to a bank rate.
  Disadvantages:
  • Not suitable when costs and benefits cannot be assessed in pecuniary terms.
  • The timing of net benefits can result in multiple IRRs.
  • Does not distinguish between investment sizes, which can be misleading.
  • May conflict with NPV rankings and invariably sacrifice wealth if decision is based solely on IRR criterion.

Payback Period
  Advantages:
  • Simple to compute.
  • Simple for decision makers to understand.
  • Indicator of investment risk.
  • Indicates time needed to recoup investment.
  Disadvantages:
  • Ignores time value of money.
  • Not suitable when costs and benefits cannot be assessed in pecuniary terms.
  • Ignores wealth maximization.
  • Ignores cash flows after investment is recovered.
  • Rejects long-term investments that could maximize wealth.

Discounted Payback Period
  Advantages:
  • Considers time value of money.
  • Indicator of investment risk.
  • Indicates time needed to recoup investment.
  Disadvantages:
  • Not suitable when costs and benefits cannot be assessed in pecuniary terms.
  • Requires selection of a suitable discount rate.
  • Ignores cash flows after investment is recovered.
  • Ignores wealth maximization.
  • Rejects long-term investments that could maximize wealth.

Cost-Effectiveness Analysis
  Advantages:
  • Useful when it is difficult to monetize outcomes.
  • Can result in cost savings for an evaluation, as existing effectiveness data from the evaluation can be used.
  Disadvantages:
  • Cannot judge absolute worth of an investment.
  • Not suitable for assessing worth of investments with different outcomes.
  • Impossible to determine the net overall monetary value of any investment.

Cost-Feasibility Analysis
  Advantages:
  • Investments which exceed budgets are ruled out at the outset.
  • Relatively simple to perform, as benefits do not require quantification.
  Disadvantages:
  • Only acts as a rough screening mechanism.
  • Provides no guidance on which investment is best.

Cost-Utility Analysis: Traditional Approach
  Advantages:
  • Does not have to adhere to stringent data requirements.
  • Encourages and promotes stakeholder input into decision making.
  • Incorporates stakeholders’ preferences, which may generate greater stakeholder buy-in.
  Disadvantages:
  • Based on subjective preferences.
  • Cannot judge absolute worth of an investment.
  • Results cannot be generalized.
  • Impossible to determine the net overall monetary value of any investment.

Cost per QALY Gained ($/QALYG)
  Advantages:
  • Allows comparison of programs with different nonmonetary outcomes, even in different sectors (e.g., health vs. education).
  • Single-program thresholds for funding in certain countries make cost per QALYG attractive for cost-inclusive evaluation of single programs.
  Disadvantages:
  • Outcome may be biased toward health-focused programs, given that years of life added are the primary criterion, and against older participants, who have fewer years of life to live.
  • Quality adjustment is fundamentally subjective and may vary greatly according to interest groups.

Cost per DALY Averted ($/DALYA)
  Advantages:
  • An alternative to cost per QALYG if individual-level data on outcomes or costs are not available.
  • Often used for prevention programs.
  Disadvantages:
  • Could be biased toward health-oriented programs and against older participants.
  • Not as universal a metric as cost per QALYG.
  • Relies heavily on proxies shown to be predictive of disability and shortened lives.

Return on Investment
  Advantages:
  • Easy to understand.
  • A good measure of liquidity.
  • Helpful for achieving goal congruence among different divisions of a company.
  Disadvantages:
  • Ignores time value of money.
  • Can be computed in several ways, which will provide different results and thus can affect decision making.
  • Examines only short-term results.
  • Will be affected by firm’s accounting policies.
  • Can lead to suboptimal decisions as investment size is ignored.

Surrogate Market Valuation Techniques
  Advantages:
  • Based on actual or observed behavior.
  • Useful for environmental policy making.
  Disadvantages:
  • Does not consider non-users.
  • Subjective when willingness-to-pay and willingness-to-accept valuations are used.


3. IRR, a methodology used to determine the rate of return earned by a project;

4. PP, which shows the amount of time elapsed before recovery of the original investment in the program;

5. DPP, which shows the amount of time to recoup the original investment, a methodology more robust than the basic PP as it takes account of the time value of money;

6. cost-effectiveness analysis, which compares nonmonetary outcomes achieved by a program to costs incurred by the program;

7. cost-feasibility analysis, which acts as an initial screening mechanism to ascertain which proposed projects can be conducted within the organization’s budget;

8. return on investment, which is used to evaluate financial performance;

9. cost-utility analysis, which has several variants, including the traditional method, which shows the lowest cost per unit of utility;

10. cost per quality-adjusted life years (QALY) gained, which assesses the cost of the program divided by total years of life gained as a result of the program after adjustment of those years gained for quality; and

11. cost per disability-adjusted life years (DALY) no longer lost (saved), which assesses the cost of the program divided by the sum of (a) avoided losses of years of life given one’s life expectancy at the start of the program plus (b) avoided years of life lived with disabilities.

This chapter also discusses three surrogate market valuation techniques, namely, (1) hedonic pricing, (2) travel cost approach, and (3) contingent valuation. These approaches are particularly useful for valuing environmental attributes where market prices do not exist. The chapter concludes by examining pros and cons of the methodologies discussed for better understanding of the contributions particular methodologies can make to cost-inclusive evaluation.
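The cost-per-QALY and cost-per-DALY ratios in the summary reduce to simple divisions once the outcome units have been computed. A minimal numeric sketch, with all figures hypothetical:

```python
# Hypothetical illustration of cost per QALY gained and cost per DALY averted.

program_cost = 500_000  # total program cost in $

# QALYs gained: years of life added, each weighted by quality
# (0 = dead, 1 = full health); one entry per participant.
years_gained = [4, 6, 3]
quality_weights = [0.8, 0.9, 0.7]
qalys = sum(y * w for y, w in zip(years_gained, quality_weights))

# DALYs averted = avoided years of life lost + avoided years lived with disability
dalys_averted = 5.0 + 3.5

print(round(program_cost / qalys, 2))          # $ per QALY gained
print(round(program_cost / dalys_averted, 2))  # $ per DALY averted
```

The quality weights and disability estimates are exactly the subjective, proxy-dependent inputs that Table 4.2 flags as limitations of these metrics.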


DISCUSSION QUESTIONS

(1) This chapter discussed several cost-analytical methodologies that can be used in cost-inclusive evaluation. Create a bulleted list of the important issues that should be considered when choosing a cost-analytical methodology for a cost-inclusive evaluation. Now compare your individual responses and engage in a class discussion.

(2) Imagine you are a nonprofit organization interested in establishing a juvenile rehabilitation program in your community. Work in pairs and answer questions (a) to (d).

    (a) Identify four program objectives.

    (b) Identify four suitable activities for this program. Keep in mind that activities should be aligned with your program objectives. For each activity, identify potential monetary costs and monetize these costs. Use Tables 3.1 and 3.2 as guides to identify and monetize your costs. Share your answers with your classmates. Were your activities and costs similar to those of other groups? Were your costs realistic? How did you arrive at the costs for your various activities?

    (c) Now assume that your organization has a limited budget and can implement only two activities. Which two would you select for implementation? Are these two activities the most suited to the program objectives? How did you make this determination?

    (d) Think about three potential cost savings (e.g., from other human services such as criminal justice services) that the community may gain from this program. In essence, you are making your analyses more sophisticated. Note that we have indicated that complex analyses such as social cost-benefit analysis computations are outside the scope of this book. Simultaneously, we have stressed that qualitative discussions can be quite useful. How would such a discussion help your funding application if you were the program administrator? Switch roles and imagine you are an evaluator instead. Do you think that such a discussion would be useful in your evaluation report? Why or why not?
(3) The return on higher education can be an eye-opener for many persons.

    (a) Discuss as a class the costs and benefits associated with a high school diploma versus a BSc degree.

    (b) Next, discuss how you would monetize these costs and benefits.

    (c) Individually perform present value calculations using a discount rate of 4%. Use Figure 4.1 as a guide. You need to perform two sets of calculations, one for a high school diploma and one for a BSc degree. Observe the net differential between the two educational levels. Use the following assumptions:

        • Perform this analysis using the individual perspective.
        • Your high school diploma is obtained at age 18, and work commences at age 19. High school education is funded by the government, so tuition costs should not be considered.
        • Your BSc is obtained at age 22, and work commences at age 23. Your BSc is funded by you.
        • Retirement age is 65 years for both education levels.

    Perform your calculations in an Excel spreadsheet. Scrutinize the two sets of computations. Share your observations with the class.

    Hint: To create discount tables up to Year 65, pick up the Year 10 discount factor shown in Appendix 4.2 and divide by 1.04. For example, Year 11 will be .6756/1.04 = .6496, Year 12 will be .6496/1.04 (i.e., the discount factor for Year 11 divided by 1.04) = .6246, Year 13 will be .6246/1.04 (i.e., the discount factor for Year 12 divided by 1.04) = .6006. . . . Year 65 will be .0781. Write a formula in Excel to perform these calculations.

    (d) Calculate the NPV of education with a high school diploma. Use Figure 4.2 as a template.

    (e) Calculate the NPV of education with a BSc. Use Figure 4.2 as a template.

    (f) Redo both sets of computations without discounting. Share your observations when discounting is used versus when it is not used.

(4) Two programs are being considered for replication in another locale. ABC nonprofit has asked you to analyze both program options and provide a recommendation. The following data are available: Program A has an initial investment cost of $100,000 and an additional investment cost in Year 3 of $20,000. Program B has an initial investment cost of $60,000. Additional information for the two programs is provided below.

                          Year 1      Year 2      Year 3      Year 4
    Program A
      Revenue             $150,000    $150,000    $150,000    $150,000
      Operational Costs   $120,000    $130,000    $140,000    $145,000
    Program B
      Revenue             $250,000    $250,000    $250,000    $250,000
      Operational Costs   $250,000    $200,000    $240,000    $230,000


    (a) Use Figure 4.1 as a template and perform present value computations using a discount rate of 5%.

    (b) Use Figure 4.2 as a guide and calculate the NPV for each program.

    (c) Use Figure 4.3 as a guide and calculate the benefit-cost ratio for each program.

    (d) Use Figure 4.4 as a guide and calculate the IRR for each program.

    (e) Use Figure 4.5 as a guide and calculate the DPP for each program.

    (f) What other contextual information would be helpful in assisting with a recommendation?

    (g) Provide a recommendation to your client.

    (h) How can the different analyses computed in (b), (c), (d), and (e) inform decision making?


ECONOMIC APPRAISAL FORMULAS

Net Present Value (see Figure 4.2)

NPV = ΣPV Benefits – [ΣPV Investment + ΣPV Costs]

where
  Σ          = sum of
  PV         = present value
  Benefits   = cash inflows (e.g., service fees)
  Investment = cash outflows (capital costs)
  Costs      = cash outflows (operating costs)

Benefit/Cost Ratio (see Figure 4.3)

Benefit/Cost Ratio = ΣPV Benefits ÷ (ΣPV Investment + ΣPV Costs)

where Σ, PV, Benefits, Investment, and Costs are defined as above.

IRR Linear Interpolation Formula (see Figure 4.4)

IRR = Lower Rate + [(Higher Rate – Lower Rate) × (NPV at Lower Rate ÷ (NPV at Lower Rate – NPV at Higher Rate))]

where
  Lower Rate         = an arbitrary discount rate chosen
  Higher Rate        = an arbitrary (higher) discount rate chosen
  NPV at Lower Rate  = actual NPV derived using the lower discount rate
  NPV at Higher Rate = actual NPV derived using the higher discount rate

Payback Period: Equal Net Cash Flows (NCF) (see Figure 4.5)

Payback Period = Investment Cost ÷ Annual NCFs

where
  Investment Cost = cash outflows (initial capital costs)
  Annual NCFs     = annual cash inflows – annual cash outflows

Discounted Payback Period: Unequal Net Cash Flows (see Figure 4.5)

DPP = Year before Full Recovery + (Unrecovered Cost at Start of Year of Recovery ÷ NCF during Year of Recovery)
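As an illustration, the NPV, IRR-interpolation, and discounted payback formulas above can be implemented directly. This is a sketch with hypothetical cash flows, not an example from the chapter; note that the linear-interpolation IRR is only an approximation whose accuracy depends on how closely the two trial rates bracket the true rate.

```python
# Sketch of the economic appraisal formulas, using hypothetical cash flows.
# All figures are illustrative, not drawn from the chapter's examples.

def pv(amount, rate, year):
    """Present value of a single cash flow received in `year`."""
    return amount / (1 + rate) ** year

def npv(rate, net_cash_flows, investment):
    """NPV = sum of discounted net cash flows minus initial (year 0) investment."""
    return sum(pv(ncf, rate, t) for t, ncf in enumerate(net_cash_flows, start=1)) - investment

def irr_interpolated(net_cash_flows, investment, low, high):
    """Linear-interpolation IRR:
    lower rate + (higher - lower) * NPV_low / (NPV_low - NPV_high)."""
    npv_low = npv(low, net_cash_flows, investment)
    npv_high = npv(high, net_cash_flows, investment)
    return low + (high - low) * npv_low / (npv_low - npv_high)

def discounted_payback(net_cash_flows, investment, rate):
    """Years until discounted net cash flows recover the investment."""
    unrecovered = investment
    for year, ncf in enumerate(net_cash_flows, start=1):
        d = pv(ncf, rate, year)
        if d >= unrecovered:
            # year before full recovery + unrecovered cost / discounted NCF in that year
            return (year - 1) + unrecovered / d
        unrecovered -= d
    return None  # never recovered within the horizon

flows = [40_000, 45_000, 50_000, 55_000]  # hypothetical annual net cash flows
inv = 120_000                              # hypothetical initial investment
print(round(npv(0.05, flows, inv), 2))
print(round(irr_interpolated(flows, inv, 0.05, 0.25), 4))
print(round(discounted_payback(flows, inv, 0.05), 2))
```

Because all three calculations share the same discounted cash flows, setting them up once (here as functions, or as spreadsheet formulas) makes it inexpensive to report several appraisal indices from the same data.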


Cost-Effectiveness Analysis: Cost-per-Participant Approach (see Figure 4.6)

Cost per Participant = (ΣPV Investment + ΣPV Costs) ÷ Number of Participants

where
  Σ          = sum of
  PV         = present value
  Investment = cash outflows (capital costs)
  Costs      = cash outflows (operating costs)
  Number of Participants = program participants for period under consideration

Cost-Utility Ratio (Traditional Method) (see Table 4.1)

Cost-Utility Ratio = Cost ÷ Expected Utility

where
  Cost             = cash outflows for expenditure
  Expected Utility = probability × utility

Return on Investment (see Figure 4.9)

ROI = [Σ Undiscounted Net Program Benefits ÷ Σ Undiscounted Costs] × 100

where
  Σ = sum of
  Undiscounted Net Program Benefits = cash inflows – cash outflows
  Undiscounted Costs                = cash outflows
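The cost-effectiveness, cost-utility, and ROI formulas above reduce to one-line computations. A sketch with hypothetical figures (the participant counts, utilities, and cash flows are invented for illustration):

```python
# Sketch of the cost-per-participant, cost-utility, and ROI formulas.
# All numbers are hypothetical.

def cost_per_participant(pv_investment, pv_operating_costs, n_participants):
    """(sum of PV capital costs + sum of PV operating costs) / participants."""
    return (pv_investment + pv_operating_costs) / n_participants

def expected_utility(outcomes):
    """Expected utility = sum of probability * utility over possible outcomes."""
    return sum(p * u for p, u in outcomes)

def cost_utility_ratio(cost, outcomes):
    return cost / expected_utility(outcomes)

def roi_percent(net_benefits, costs):
    """[sum of undiscounted net benefits / sum of undiscounted costs] * 100."""
    return sum(net_benefits) / sum(costs) * 100

print(cost_per_participant(50_000, 150_000, 400))        # cost per participant, $
print(cost_utility_ratio(80_000, [(0.6, 8), (0.4, 5)]))  # cost per unit of utility
print(roi_percent([30_000, 35_000], [200_000, 210_000])) # ROI, %
```

Note that ROI deliberately uses undiscounted figures, which is why Table 4.2 lists "ignores time value of money" among its disadvantages.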


APPENDIX 4.1.  Executive Order 12291

In promulgating new regulations, reviewing existing regulations, and developing legislative proposals concerning regulation, all agencies, to the extent permitted by law, shall adhere to the following requirements: (a) Administrative decisions shall be based on adequate information concerning the need for and consequences of proposed government action; (b) Regulatory action shall not be undertaken unless the potential benefits to society for the regulation outweigh the potential costs to society; (c) Regulatory objectives shall be chosen to maximize the net benefits to society; (d) Among alternative approaches to any given regulatory objective, the alternative involving the least net cost to society shall be chosen; and (e) Agencies shall set regulatory priorities with the aim of maximizing the aggregate net benefits to society, taking into account the condition of the particular industries affected by regulations, the condition of the national economy, and other regulatory actions contemplated for the future. (National Archives, 2020, Sec. 2; www.archives.gov/federal-register/codification/executive-order/12291.html)


APPENDIX 4.2.  Present Value Discount Tables

Present Value Interest Factors for $1 Discounted at r Percent for n Periods

Period      1%       2%       3%       4%       5%       6%       7%       8%
  1       0.9901   0.9804   0.9709   0.9615   0.9524   0.9434   0.9346   0.9259
  2       0.9803   0.9612   0.9426   0.9246   0.9070   0.8900   0.8734   0.8573
  3       0.9706   0.9423   0.9151   0.8890   0.8638   0.8396   0.8163   0.7938
  4       0.9610   0.9238   0.8885   0.8548   0.8227   0.7921   0.7629   0.7350
  5       0.9515   0.9057   0.8626   0.8219   0.7835   0.7473   0.7130   0.6806
  6       0.9420   0.8880   0.8375   0.7903   0.7462   0.7050   0.6663   0.6302
  7       0.9327   0.8706   0.8131   0.7599   0.7107   0.6651   0.6227   0.5835
  8       0.9235   0.8535   0.7894   0.7307   0.6768   0.6274   0.5820   0.5403
  9       0.9143   0.8368   0.7664   0.7026   0.6446   0.5919   0.5439   0.5002
 10       0.9053   0.8203   0.7441   0.6756   0.6139   0.5584   0.5083   0.4632

Period      9%      10%      11%      12%      13%      14%      15%
  1       0.9174   0.9091   0.9009   0.8929   0.8850   0.8772   0.8696
  2       0.8417   0.8264   0.8116   0.7972   0.7831   0.7695   0.7561
  3       0.7722   0.7513   0.7312   0.7118   0.6931   0.6750   0.6575
  4       0.7084   0.6830   0.6587   0.6355   0.6133   0.5921   0.5718
  5       0.6499   0.6209   0.5935   0.5674   0.5428   0.5194   0.4972
  6       0.5963   0.5645   0.5346   0.5066   0.4803   0.4556   0.4323
  7       0.5470   0.5132   0.4817   0.4523   0.4251   0.3996   0.3759
  8       0.5019   0.4665   0.4339   0.4039   0.3762   0.3506   0.3269
  9       0.4604   0.4241   0.3909   0.3606   0.3329   0.3075   0.2843
 10       0.4224   0.3855   0.3522   0.3220   0.2946   0.2697   0.2472
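Every factor in the table above equals 1/(1 + r)^n, so each column can also be generated recursively by dividing the previous period's factor by (1 + r), the same shortcut suggested in discussion question 3. A short sketch:

```python
# Generate present value interest factors PVIF(r, n) = 1 / (1 + r)^n,
# e.g. to extend the 4% column beyond period 10 as in discussion question 3.

def pvif_column(rate, periods):
    factors = []
    factor = 1.0
    for _ in range(periods):
        factor /= 1 + rate      # each period's factor = previous factor / (1 + r)
        factors.append(factor)
    return factors

col_4pct = pvif_column(0.04, 65)
print(round(col_4pct[9], 4))    # period 10 factor at 4%
print(round(col_4pct[64], 4))   # period 65 factor at 4%
```

The period-10 value reproduces the 0.6756 entry in the table, and the period-65 value reproduces the .0781 figure cited in the discussion question hint.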

CHAPTER 5

Considerations When Using Economic Appraisal Methods

Establishing whether a program is worth its costs is an important question that should be addressed during an evaluation. Irrespective of how sophisticated an evaluation is in establishing outcomes and what caused the outcomes, the essential question for many stakeholders is: How much did it cost to produce those outcomes? This concern has been documented in the evaluation literature for decades (see Chelimsky, 1997; Davidson, 2005; Posavac & Carey, 2003; Rossi et al., 2004; Stake, 2004; Weiss, 1998; Yates, 1979, 1994). Yet cost-inclusive evaluation remains considerably underutilized (Christie & Fleischer, 2010; Persaud, 2018, 2020, 2021). In addition, it has been noted that, when utilized, cost studies are often of poor quality (Madsen, Eddleston, Hansen, & Konradsen, 2017). This deficiency in the use of cost-inclusive evaluation, as well as the inadequate quality of these evaluations, needs to be rectified. As economies worldwide continue to struggle to recover from the devastation of the COVID-19 pandemic, scarcity of financial resources has become a stark reality. In this severely cash-constrained environment, program decision makers need to work hard to convince funders that their programs are worthwhile. Funders, for their part, are very focused on ensuring that funds are utilized to maximize social good at the lowest cost, while delivering high-quality service. Conducting a cost-inclusive evaluation using one or more of the economic appraisal methods discussed in Chapter 4 requires consideration of several important issues to protect the integrity of the study. For instance, it is important to understand the types of decision making that can be facilitated with the different types of methodologies, the perspective that the study should examine, the appropriate discount rate that should be used when present value computations are required, how over- or underestimation of either costs or benefits or both would affect the analyses, and so on. This chapter examines several issues that require consideration when planning a cost-inclusive evaluation so that the credibility of these evaluations can be improved.

WHICH COST-ANALYTICAL METHODOLOGY IS BEST?

Chapter 4 presented and discussed several economic appraisal methodologies. Evaluators, therefore, have a wide array of methodologies at their disposal. However, for those new to cost-inclusive evaluation, it may still be somewhat overwhelming to figure out which of the methodologies discussed is most suitable for a particular program and its evaluation needs. As for most methods of conducting most forms of evaluation, there is no right or wrong answer. Each methodology has strengths and weaknesses. Moreover, frequently more than one methodology can be used to support a particular decision. As you will have observed in Chapter 4, several of the methodologies share a fundamental principle: discounting. This means that different types of analyses can usually be performed using the same data. After you have read Chapter 4, cost-inclusive evaluation should certainly be looking quite doable, much less intimidating, and, hopefully, quite exciting!

The most important consideration to bear in mind when thinking about choice of a cost-analytical methodology is that the methodology chosen must answer the questions that need to be answered. In other words, a methodology should not be chosen merely because of simplicity, popularity, or because the evaluator is familiar or comfortable with the particular methodology. Rather, a methodology should be used because it can provide the information that decision makers have requested or need to make informed decisions. As you think about cost-inclusive evaluation, consider the following to aid your decision:

• Stakeholder preferences. If a particular cost-analytical methodology is requested by your client or other critical stakeholders, the requested methodology should ideally be facilitated. However, do not use a methodology unless it makes sense in the context of the decision-making needs. Often, clients may hear about a particular methodology and ask about it. However, client data may not be sufficient to support use of the methodology.

• Monetization of costs and benefits. Several of the economic appraisal methodologies discussed in Chapter 4 require that both costs and benefits be expressed in monetary units—in other words, monetized. However, monetizing certain types of benefits may not always be practical, possible, or even acceptable to some stakeholders. For instance, the benefit of higher self-esteem is challenging to convert to a dollar amount. Likewise, some costs may be difficult and controversial to monetize (e.g., lives saved, cultural practices continued, sacred lands preserved). Difficulties with valuation and measurement may therefore make some methodologies unsuitable.

• Is choice of methodology entirely at the evaluator’s discretion? If the client has not requested a specific cost-analytical methodology, review the literature on similar programs to see the type of methodology used and adopt the same one if practical and feasible. This will permit comparative analysis between similar programs, which can be quite insightful. Additionally, if various stakeholder groups disagree with the methodology proposed, such a review will provide a rational and logical justification for the chosen methodology. “If it worked for them, it could work for us” could encourage considerable buy-in from stakeholders who might otherwise resist inclusion of costs or monetary outcomes in an evaluation.

• Data quality and availability. We are all familiar with the term garbage in, garbage out (GIGO). Quite often, smaller programs may not have properly maintained accounting records, especially if they are staffed by volunteers. Accurate and reliable data are critical for cost-inclusive evaluations. If there are concerns with data quality, certain methodologies should be avoided.
Moreover, in this situation it can be essential to run the analyses using not only “best guess” estimates but also low and high estimates of both costs and outcomes to examine the possible range of cost-outcome indices—for example, $/QALYG—that could result if actual costs and actual outcomes assumed different values in the range between low and high estimates. These sensitivity analyses can vary in complexity. One can simply calculate (1) a worst-case scenario using highest costs and worst outcomes, (2) a best-case scenario using lowest costs and best outcomes, and (3) what might be called a “best guesstimate” scenario using average or median costs and outcomes. A more sophisticated version of this approach is provided by probabilistic sensitivity analysis (PSA; cf. Hatswell, Bullement, Briggs, Paulden, & Stevenson, 2018). Given their importance, if costs as well as outcomes are considered in an evaluation, it is important to use some form of sensitivity analysis whenever possible. This issue is discussed in more detail later in this chapter.

• Financial costs and time frame for the study. Most evaluations have stringent budgets and limited time frames. If data are not properly collected and maintained, this will increase an evaluation’s budget and timeline. Some cost-analytical methodologies are quite sophisticated and require very precise data, which may be quite time-consuming and expensive to collect. Constraints on either time or the evaluation budget may therefore exclude certain methodologies completely or reduce the sophistication of the level of analysis that can be performed.

In summary, the fundamental concern should not necessarily be that of the best cost-analytical methodology but, rather, which cost-analytical methodology is most appropriate for providing answers to the fundamental questions that need answering, bearing in mind cost, time, and data availability. The choice of methodology often requires trade-offs among various factors that may frequently be in conflict.
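The simple three-scenario sensitivity analysis described above (worst case, best case, and "best guesstimate") can be sketched in a few lines. All cost and outcome ranges here are hypothetical:

```python
# Sketch of a three-scenario sensitivity analysis for a cost-outcome index
# such as cost per QALY gained. All ranges are hypothetical.

scenarios = {
    # scenario name: (cost estimate in $, QALYs-gained estimate)
    "worst case (highest cost, worst outcome)": (600_000, 8.0),
    "best guesstimate (median cost, median outcome)": (500_000, 10.0),
    "best case (lowest cost, best outcome)": (420_000, 12.5),
}

for name, (cost, qalys) in scenarios.items():
    print(f"{name}: ${cost / qalys:,.0f} per QALY gained")
```

Reporting the full range, rather than a single point estimate, lets decision makers see how sensitive the cost-outcome index is to uncertainty in the underlying data; probabilistic sensitivity analysis extends the same idea by sampling costs and outcomes from full distributions.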

PERSPECTIVE FOR THE STUDY

An important requirement for proper planning of a cost-inclusive evaluation is to ascertain at the outset whose perspective the study will examine (Persaud, 2018). The perspective adopted will determine the scope of the study and can produce drastic differences in the analyses (Drummond et al., 2015). A financial analysis is narrow in scope and examines monetary costs and benefits from the perspective of private interests, such as an individual participant, implementing agency, funder, organization, or target of services. In contrast, an economic analysis is much wider in scope and examines costs and benefits from a societal perspective, such as a country, municipality, region, state, or nation (Persaud, 2007).

[Figure (reconstructed from text fragments): Cost-inclusive evaluation comprises financial analysis and economic analysis. Financial analysis takes the private interest perspective (most common in cost-inclusive evaluation; narrower in scope) and examines items that entail monetary inputs/outputs, using actual prices for inputs/outputs. Economic analysis takes the societal perspective (less common in cost-inclusive evaluation; complex and sophisticated) and examines opportunity costs to society and all resources, even free resources; it is concerned with third-party costs, such as externalities caused by the program, and aggregates all perspectives.]

In practice, the use of the societal perspective often excludes the participant perspective, not because the participant is not part of society but because what may be valuable to participants (e.g., increased access to government support or a reduction in stolen property) can be viewed from the societal perspective as mere “transfer payments” with no net change in value to society (e.g., transfer of funds from taxpayers to support recipients or transfer of property from owner to thief). For this reason, multiple perspectives can be used in cost-inclusive evaluation. This usually results in multiple “bottom lines” for costs and outcomes attributable to programs, which can complicate funding and other decisions. Nevertheless, the resources used by programs and program outcomes can indeed differ between perspectives (see French et al., 2018).

If the client has requested a particular type of study, then the evaluator’s job is made easy. However, if the client is silent, then the evaluator must use professional judgment to determine which type of study and perspective will be most relevant, useful, and cost-effective. In many cases, clients may not request any type of cost-inclusive evaluation, but evaluators may want to conduct one. The evaluation budget that the client can afford and the time frame for an evaluation will often influence the type and complexity of the cost-inclusive evaluation that can be performed. Often, only rudimentary types of analyses from the private interest perspective may be possible. Notwithstanding this possibility, it is better to have some cost data than none at all, as long as those data are of sufficient quality and are sufficiently representative of the perspectives of all stakeholders affected so that good decisions can be made.

WHEN DIFFERENT STAKEHOLDERS PREFER DIFFERENT COST‑ANALYTICAL FRAMEWORKS

Trying to please multiple stakeholders whose interests may be at odds is not unusual in evaluation. Different stakeholders may request different cost-­analytical frameworks, which may be totally unsuitable given the data that are available, or may be appropriate but impossible to accommodate within the evaluation budget. Convincing stakeholders of the merits of what might be best and most useful is never an easy task. However, it is important to try to reach a reasonable compromise acceptable to all parties. The following strategies may help with this dilemma:

• Align cost-­analytical methodology with evaluation questions and decision makers' needs. Ensure that you use methodologies that can answer all pertinent evaluation questions on costs and outcomes, so that informed decisions can be made.

• Try to accommodate all requests. As highlighted in Chapter 4, all cost-­analytical methodologies require cost information in monetary units, such as dollars ($) or pounds (£). Some also require outcomes to be valued in monetary units. When a program, or an individual participant's participation in a program, endures for more than a year, resources consumed (costs) need to be discounted, as do resources produced (benefits). Plan your analyses and share those plans readily with stakeholder representatives for feedback, revision, and more feedback, using computer spreadsheets (e.g., Excel) or statistical software (e.g., SPSS) to set up your data and write formulas for all your calculations. It is then a relatively effortless task to provide different types of cost analyses using the same data. Using spreadsheets or data analysis programs to specify analyses also makes it easy to repeat analyses with new, corrected data (which happens often), to replicate analyses if stakeholders so desire, and potentially to transform a one-off cost-­inclusive evaluation into a monitoring and evaluation system that programs can use routinely (e.g., to develop dashboards and quarterly reports).

• Review the literature. Conduct a literature review and see what types of cost analyses were done on similar programs. Reinventing the wheel is overrated, and you can learn from other evaluators' mistakes (and make new ones!). Have a meeting with critical stakeholders, share this information, and try to reach a consensus on the specific types of cost-­inclusive evaluation you will perform and report.

• See whether the evaluation budget can be increased. Talk with your client and see whether the evaluation budget can be increased if you need to collect more information to accommodate more sophisticated types of analyses. Alternatively, see whether program staff can gather some information to keep data collection costs reasonable. This may require providing some training, along with specialized templates for data input.
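As a minimal sketch of the "set up your data once, then rerun" advice above, with Python standing in for a spreadsheet and all figures hypothetical: keeping raw inputs separate from the formulas that use them makes it trivial to repeat an analysis when corrected data arrive.

```python
# Illustrative sketch (not from the book): Python standing in for a
# spreadsheet. Inputs are kept separate from formulas so the analysis
# can be rerun unchanged when corrected data arrive.

def present_value(amount: float, rate: float, year: int) -> float:
    """Discount an amount received or spent in a given year back to year 0."""
    return amount / (1 + rate) ** year

def total_discounted_cost(yearly_costs: list, rate: float) -> float:
    """Sum of discounted costs for a multiyear program (years 1, 2, ...)."""
    return sum(present_value(c, rate, t)
               for t, c in enumerate(yearly_costs, start=1))

# First run, with preliminary data (hypothetical numbers):
print(round(total_discounted_cost([210_000, 220_000, 240_000], 0.05), 2))

# After a correction arrives, the identical analysis repeats with no rework:
print(round(total_discounted_cost([210_000, 225_000, 240_000], 0.05), 2))
```

The same separation of data from formulas is what makes it cheap to rerun an analysis under several cost-analytical frameworks, or to fold it into a routine monitoring system.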

WHEN STAKEHOLDERS WISH TO EXCLUDE SOME COSTS, BENEFITS, EFFECTIVENESS MEASURES, OR COST‑ANALYTICAL FRAMEWORKS

It is quite normal to encounter situations in which stakeholders wish to exclude certain costs or benefits. Some stakeholders may also be averse to the use of certain effectiveness measures or cost-­analytical methodologies. Five primary reasons may explain why this occurs.

1. Political agenda. Certain stakeholders may have a political agenda. If this is the case, it will be exceedingly difficult to change their minds about why their requests are unreasonable. For instance, some reductions in program budgets may be designed to inflict harm on disadvantaged persons, to perpetuate discriminatory practices, and to enlarge inequities in education, health, and well-being. This is not unheard of in education, in health, and in income support, unfortunately. The most developed countries have long, tragic histories of these practices.

2. External pressure to cut program budgets. Other budget reductions seem to be made without intent to harm, in response to economic disruptions caused by war, pandemics, climate change, and more. These, too, can have unintended effects that accomplish the same or worse outcomes than those resulting from efforts to harm intentionally. Faced with external pressure to reduce expenditures, the necessity of cutting some program costs to continue operations, unexpected increases in other program costs, or reduced tax revenues, programs may inadvertently cause less advantaged individuals to suffer more than others. For example, if hours for a public health service clinic are reduced, or appointments and tests are further delayed, more privileged persons can simply seek services elsewhere, either in the private sector or in another region or country. Disadvantaged persons, however, seldom have the discretionary monies to spend on private health care: basic necessities of housing, food, clothing, and utilities often exhaust all available income and more. Thus disadvantaged persons often are disproportionately affected by reductions in public program budgets.

In general, then, we recommend exercising great care when planning reductions in service availability, frequency, or quality in response to budget restrictions. The individuals and peoples who need program services most often have the least flexibility to access alternative services. Reducing or denying services to those most in need will likely exacerbate their circumstances, making their disadvantage ever more severe: a vicious cycle easily started and difficult to slow. In this way, reducing program costs risks providing less to fewer.
This can be penny-wise but pound-­foolish, as the saying goes: reduced program costs, and consequently reduced program services, can quickly cause minor or moderate health problems to become major health problems. Public monies spent on those major health problems within a few years may well exceed the short-term savings in program costs. It is only safe to reduce program costs when less expensive services are shown to be at least as effective or beneficial as more expensive services. Even in this situation, the public may be better served, and long-term expenditures better reduced, by using the same budget to provide services to more people with the less expensive and as-­effective program.

3. Lack of understanding. Different stakeholders may have limited understanding of the resources → activities → processes → outcomes (RAPO) relationship. Additionally, some stakeholders may not be clear about what the program is supposed to do, or may be used to doing things in a particular way and thus reluctant to consider alternatives that can be more efficient in generating outputs. Moreover, various intervening events can affect the relationship between money expended and observed and expected outcomes, and these intervening events may not always conform to what is normal or expected (Yates, 1996).

4. Different interpretations. Different stakeholders naturally interpret some resources, activities, processes, outcomes, and impacts differently. For instance, a healthy lifestyle will mean different things to different people.

5. Different perspectives. Various stakeholders may have vastly different perspectives about the framework that should be used, even though they have little or no knowledge of cost-­analytical methodologies. Further, they may have heard about a particular methodology and request its use without realizing that their available data may not be sufficient to support it.

Trying to please multiple stakeholders and to accommodate multiple requests is normal on an evaluation job. Those commissioning evaluations will frequently make requests that may be totally unsuitable given the nature and context of the study, or that may be appropriate but cannot be accommodated within the budget they can afford. Convincing clients of the merits of what might be best and most useful will not always be easy. The best advice for dealing with a difficult client is to understand clearly why objections are being raised. If the objections are not politically motivated, a collaborative and participatory evaluation approach can often help resolve many issues. Convene a meeting and discuss why you need to consider certain things. Refer to the literature and what has been done in similar programs. Note that you may also be able to use several different cost-­analytical methodologies at little or no extra cost; for example, you may have noticed that several methodologies are built off the present value computations in Chapter 4.
If the concern is about the cost ingredients included in the analysis, then the analysis can be conducted again using different cost ingredients so that multiple perspectives can be accommodated. Likewise, if the concern is about the assumptions used, multiple analyses can be performed using different assumptions. Again, this is a relatively easy task, provided the data are set up in a spreadsheet. This is precisely why sensitivity tests are always recommended: they can provide a more comprehensive overall analysis that may be considerably more insightful and informative for decision making while, at the same time, meeting different stakeholders' requests.


Finally, if stakeholders' requests cannot be accommodated because they are politically motivated and absolutely no compromise is forthcoming, then the evaluator may have little alternative but to refuse the job. This is never easy, but it may be the only choice, as trying to please the client may lead to a tarnished professional reputation.

INAPPROPRIATE INTERPRETATIONS OF FINDINGS FROM DIFFERENT COST‑INCLUSIVE EVALUATION FRAMEWORKS

A frequent problem encountered with cost studies is that many well-­intentioned stakeholder groups who review the results of a study have little or no formal knowledge of cost analysis. Put a currency symbol in front of a number and many more people feel they understand it and its implications, when that is not always the case. In addition, some stakeholders may not even understand the true nature of the problems that a program is designed to address. This is common for funders reviewing multiple evaluations of diverse programs. Consequently, major decisions, including budget cuts or program termination, may be based on erroneous understandings of findings of the many types of cost-­inclusive evaluation. Merely reviewing the "bottom line" of cost-­inclusive findings in isolation from other factors invites bad decisions. Regardless of whether sophisticated or rudimentary cost analyses are being performed, it is critical that program context, program goals, and the assumptions used in the analysis be clearly understood before making decisions that do irreversible damage.

Consider an agency interested in funding a program to help males ages 17–25 who have dropped out of school. The program is designed to help these young men develop a positive attitude and learn an employable skill so that they can gain permanent employment. Three options are being considered (see Figure 5.1). Each option has three phases and the same program duration. Using the cost per participant shown in Figure 5.1 and concluding that Program A should be implemented because it is cheaper than Programs B and C can be quite misleading. Although the data indeed suggest that Program A is considerably cheaper than the other two alternatives, these figures by themselves are not very useful. A more informative type of analysis would be to determine the cost per graduate.
However, although this analysis would certainly provide more insight than the cost per participant, it still does not provide sufficient information to determine the true “worth” (value) of the program, keeping in mind the program goals.

• Phase 1: Work-preparedness skills (e.g., punctuality, proper dress, etiquette, courtesy, positive on-the-job attitude)
• Phase 2: Marketable job skills (carpentry, masonry, plumbing, refrigeration repairs)
• Phase 3: Actual internship opportunities (i.e., on-the-job training)

Option       Costs      Participants   Cost per Participant
Program A    $ 80,000   125            $ 80,000/125 = $  640
Program B    $150,000   140            $150,000/140 = $1,071
Program C    $200,000   160            $200,000/160 = $1,250

Option       Costs      Employed Graduates*   Cost per Employed Graduate
Program A    $ 80,000    50                   $ 80,000/ 50 = $1,600
Program B    $150,000   100                   $150,000/100 = $1,500
Program C    $200,000   140                   $200,000/140 = $1,429

*Graduates employed with the same employer 2 years after training.

FIGURE 5.1. Simple cost-effectiveness analysis.
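For readers setting these figures up in software rather than by hand, the two unit-cost columns of Figure 5.1 can be computed in a few lines. The sketch below (in Python, though a spreadsheet works equally well) uses the costs and counts from the figure:

```python
# Sketch reproducing the unit-cost columns of Figure 5.1; costs and
# participant counts are taken directly from the figure.

programs = {
    "Program A": {"cost": 80_000, "participants": 125, "employed_grads": 50},
    "Program B": {"cost": 150_000, "participants": 140, "employed_grads": 100},
    "Program C": {"cost": 200_000, "participants": 160, "employed_grads": 140},
}

for name, p in programs.items():
    print(f"{name}: ${p['cost'] / p['participants']:,.0f} per participant, "
          f"${p['cost'] / p['employed_grads']:,.0f} per employed graduate")
```

Note how the ranking reverses: Program A is cheapest per participant ($640) but most expensive per employed graduate ($1,600), while Program C is cheapest per employed graduate ($1,429).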

We need to relate program costs to our program goal—permanent employment. A better type of analysis would therefore be to examine the cost per graduate employed for 2-plus years with the same employer. Or perhaps we might wish to use a shorter time frame, such as 1 year, or a longer one, such as 5 years. Using a minimum of 2 years' permanent employment (see Figure 5.1), we see that the most expensive program is actually the most cost-­effective when the program goal is considered.

We could stop our analysis at this point. However, we may also wish to ascertain employers' overall impressions of the graduates to determine which of the programs is best. Suppose our employers' survey indicates that graduates from Program A have a more positive attitude than those from Programs B and C, but that the craftsman skills obtained in Program C are superior to those obtained in Programs A and B. If the programs did not have to be purchased as a full package, we could examine the cost of implementing Phase 1 of Program A and Phase 2 of Program C so that our rehabilitated young men can be well-­rounded individuals.

These analyses should, of course, also consider the natural variability between individuals in program effectiveness. Confidence intervals, standard deviations, and statistical tests of whether apparent differences are reliable and meaningful have become standard in evaluation of not only educational but also psychological services (e.g., Kacmarek, Yates, Nich, & Kiluk, 2021). Similar consideration and description of possible variability between individuals in number of sessions and in use of other program resources is common in cost-­effectiveness and even cost-­benefit analyses (e.g., McMillan, Gilbody, & Richards, 2010).

The aforementioned discussion illustrates that analyses require looking beyond mere numbers. This is the case even when more formal and sophisticated cost-­analytical methodologies are used. For instance, IRR is one methodology that can point to an incorrect decision depending on the timing of cash flows. If two investments, A and B, are being considered, and the IRR for Investment A is 16% compared with 20% for Investment B, Investment B would be selected if only the IRR criterion is used. If the NPV of Investment A is $200,000 compared with $180,000 for Investment B, using the IRR criterion would lead to a suboptimal decision, as $20,000 ($200,000 – $180,000) in wealth would be sacrificed by choosing Investment B over Investment A.

However, interpreting the final figures from a particular cost-­analytical methodology without examining other contextual factors is not the only issue that can lead to inappropriate conclusions. If assumptions are used and the reader is either not aware of them or decides simply to ignore them, inappropriate conclusions can also be formulated. For example, if studies are being compared, the assumptions used would need to be similar to make an adequate comparison.

To avoid problems with inappropriate interpretations when reviewing cost data, individuals reviewing figures should have some knowledge of cost-­analytical procedures. Those involved in decision making also need to look beyond mere figures and understand the program context and goals. Findings from a cost-­inclusive evaluation will only lead to sound, intelligent, and well-­informed decisions when numbers are analyzed in conjunction with other information.
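The IRR-versus-NPV conflict described above can be reproduced with two hypothetical cash-flow streams (our own illustrative numbers, not figures from the text): the investment whose payoff arrives early earns the higher IRR, yet the late-payoff investment creates more value at the decision maker's actual discount rate.

```python
# Hypothetical cash flows (year 0 first, outlays negative). These numbers
# are our own illustration of an IRR/NPV conflict, not figures from the text.

def npv(rate, cash_flows):
    """Net present value of a cash-flow stream starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Rate at which NPV = 0, found by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

invest_a = [-1_000, 0, 0, 1_450]  # payoff arrives late: lower IRR
invest_b = [-1_000, 1_200]        # payoff arrives early: higher IRR

rate = 0.05  # the decision maker's actual discount rate
for name, flows in [("A", invest_a), ("B", invest_b)]:
    print(f"Investment {name}: IRR {irr(flows):.1%}, "
          f"NPV at 5% = {npv(rate, flows):,.2f}")
# B wins on IRR, yet A creates more wealth at the 5% discount rate.
```

The design point mirrors the text: ranking by IRR alone ignores both the scale and the timing of cash flows, so the NPV comparison at the relevant discount rate should govern.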

DISCOUNT RATE CHOICES AND THEIR IMPACT ON ANALYSES

The concept of time preference, which is intricately related to discount rates, was introduced in Chapter 4. It highlighted that the prime rate and the treasury rate in the program country are the two rates typically used to discount cash flows in financial and economic analyses. As such, evaluators do not have to engage in complex computations to derive a discount rate on their own. It should be noted, however, that while the prime and treasury rates are generally deemed acceptable, many organizations may use a somewhat different discount rate. In these instances, several contextual factors come into play: conditions in the program country (e.g., local economic conditions, political stability, risk of catastrophes such as terrorist attacks or natural disasters), current practices, type of program, sector (public vs. private), riskiness of the project, the rate used on similar investments, and the anticipated life of the program. All are common factors that influence the choice of a discount rate (Persaud, 2007).

As a result of the numerous factors that can influence discount rate choices, debates on this topic have remained controversial for many decades. However, experts concur that discounting, which reflects the time value of money, is required to make costs and benefits occurring at different time periods commensurate (Persaud, 2007, 2011). Specifically, on account of what can be termed pure myopia, the present is preferred to the future, as there is no guarantee of life to enjoy future consumption.

Choosing an appropriate discount rate is an issue of considerable practical importance, as discount rate choices can profoundly affect the outcome of an analysis (Kee, 2004), because discounted cash flows are exceedingly sensitive to this variable. Low discount rates make investments appear more feasible because, when lower discount rates are used, present values are higher (see Figure 5.2). This could result in many socially inefficient programs being cleared for investment. In contrast, high discount rates may reduce the attractiveness of some programs: beneficial programs may not be cleared for acceptability or, worse, an assessment of an ongoing program may suggest that the program is not viable, and a useful program may be terminated (Persaud, 2007, 2011). Moreover, even small variations in the discount rate can significantly bias a cost study, tilting the scales from positive to negative (see Figure 5.2, which also illustrates the concept of sensitivity analysis, discussed later in the chapter).
As a result, evaluators need to be very careful to ensure that discount rates are not deliberately manipulated for political or other motives to make proposed investments appear feasible or infeasible, depending on the motive. For instance, "stakeholders pressing for the implementation of a program will argue for lower rates, and those not in favor of implementation will argue for higher rates" (White et al., 2005, p. 17).

In selecting a discount rate, it is generally helpful to conduct a literature review to see the rates that were used to discount similar projects. According to the literature, discount rates can span a wide spectrum (developed countries, 3–7%; developing countries, 8–15%; countries with high inflation, >20%). Considering the sensitivity of results to the discount rate, it is advisable to conduct sensitivity tests to verify the robustness of results. Given that cost calculations are largely done in spreadsheets or statistical software such as R, a range of discount rates can be used to explore how findings might change at the low, middle, and high values of the range suggested by the evaluation context.


Using the data from Figure 4.1, the NPV will now be calculated using discount rates of 3%, 4%, 5%, 6%, and 7%. NCF means net cash flow.

           Year 1     Year 2     Year 3     Year 4     Year 5
Revenue    $300,000   $300,000   $300,000   $300,000   $300,000
Costs      $210,000   $220,000   $240,000   $250,000   $250,000
NCFs       $ 90,000   $ 80,000   $ 60,000   $ 50,000   $ 50,000

Year   NCFs       3%         4%         5%         6%         7%
1      $90,000    87,381     86,535     85,716     84,906     84,114
2      $80,000    75,408     73,968     72,560     71,200     69,872
3      $60,000    54,906     53,340     51,828     50,376     48,978
4      $50,000    44,425     42,740     41,135     39,605     38,145
5      $50,000    43,130     41,095     39,175     37,365     35,650
Total             $305,250   $297,678   $290,414   $283,452   $276,759
NPV1              $205,250   $197,678   $190,414   $183,452   $176,759
NPV2              $ 15,250   $  7,678   $    414   $ (6,548)  $(13,241)

Note. NPV1: Initial Investment Cost is $100,000 (see Figure 4.1). NPV2: Initial Investment Cost is $290,000 instead.

Observe in this illustration that the NPV is highest when the discount rate is 3%, illustrating the attractiveness of lower discount rates. Observe also that when the initial investment cost is $290,000, a one-percentage-point increase in the discount rate, from 5% to 6%, turns the NPV negative.

FIGURE 5.2. Effect of discount rate choices on net present value computations.
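The entries in Figure 5.2 can be verified with a short script. The printed figure appears to use present-value factors rounded to four decimal places (an inference from its values), so the sketch below rounds the same way to reproduce the table exactly:

```python
# Sketch reproducing Figure 5.2. Rounding PV factors to four decimals
# (as the printed figure appears to do) reproduces its entries exactly.

ncfs = [90_000, 80_000, 60_000, 50_000, 50_000]  # net cash flows, years 1-5

def discounted_total(rate):
    """Sum of the PVs of the five NCFs, using four-decimal PV factors."""
    return sum(round(cf * round(1 / (1 + rate) ** t, 4))
               for t, cf in enumerate(ncfs, start=1))

for rate in (0.03, 0.04, 0.05, 0.06, 0.07):
    total = discounted_total(rate)
    npv1 = total - 100_000  # initial investment from Figure 4.1
    npv2 = total - 290_000  # alternative initial investment
    print(f"{rate:.0%}: total {total:>8,}  NPV1 {npv1:>8,}  NPV2 {npv2:>8,}")
```

With exact (unrounded) discount factors the totals differ from the figure by only a few dollars, which is immaterial for the decision being illustrated.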

MARKET PRICES VERSUS SHADOW PRICES

In financial analysis, costs and benefits are measured at market prices, that is, the prices actually paid or received by consumers and producers. However, in economic analysis, market prices often do not provide a true reflection of societal costs and benefits, either because of market distortions (Bruce, 2000) or because no market price exists for the item (New Zealand Treasury, 2005). For example, no market prices exist for the costs and benefits associated with air pollution and improved air quality, respectively, unless one considers health care costs associated with air pollution and the nascent market for carbon credits.

In economies with few market distortions (i.e., competitive markets with minimal price controls, taxes, subsidies, and monopoly power), market prices generally provide a relatively accurate estimate of the costs of inputs and outputs. As such, these prices should be used as a starting point to value costs and benefits: they are easy to identify, simple to use, and a good reflection of implicit opportunity costs. However, in economies with many market distortions, market prices need to be replaced by shadow prices, which are imputed values of what goods or services would cost in the absence of market distortions (Persaud, 2007; Rossi et al., 2004). In principle, shadow prices should be used to reflect all social opportunity costs, but this is frequently not done in practice because deriving shadow prices can be expensive, difficult, and time-­consuming (Persaud, 2007). Considering this, the preferred practice is "to point out the existing price distortions in the market and, through sensitivity analysis, ensure the accuracy of the evaluation" (Nas, 1996, p. 164). Because the purpose of this book is to introduce evaluators to cost-­inclusive evaluation, the computation of shadow prices is beyond its scope, and sensitivity tests are recommended instead.

THE ROLE OF SENSITIVITY ANALYSES IN GAUGING UNCERTAINTY

Financial and economic decisions are based on imperfect data and uncertain future events. Specifically, investment decisions are often made on predictions (best-guess estimates) of various inflows and outflows because, in many instances, no previous experience or information is available on certain costs and benefits (Persaud, 2007). Moreover, even when the past can be used as a guide, this does not guarantee certainty. We operate in a dynamic, unpredictable environment: natural and man-made disasters, pandemics, and financial and other crises create great uncertainty about the future.

To gauge uncertainty in financial and economic analyses, evaluators can use a technique called sensitivity analysis, a simple but powerful approach useful for addressing two types of questions: (1) Is the program still worthwhile given its worst-case scenario? (2) What is the best option for reducing the risk? (New Zealand Treasury, 2005). Sensitivity tests facilitate consideration of a range of plausible alternatives to determine the overall vulnerability of a program to different outcomes (Independent Evaluation Group, 2010). For example, different discount rates are often used to verify the robustness of analyses (see Figure 5.2), and a range of values can be used for costs and outcomes to determine the overall vulnerability of the program to different assumptions, as well as to the natural variability in costs and outcomes when a program is implemented at different times or in different settings.

Sensitivity analysis generally begins by using the original analysis as a baseline from which to compute high and low estimates. The baseline estimate is usually considered the best estimate (most probable), whereas the high and low estimates reflect the best-case (most optimistic) and worst-case (most pessimistic or conservative) scenarios, respectively. In conducting sensitivity analyses, estimates must have a fair degree of credibility or they are worthless (Persaud, 2007). Sensitivity analyses can be performed using any common software package such as Excel. Once the initial spreadsheet is properly set up, analysts can effortlessly vary assumptions with a single keystroke. More complex and complete sensitivity analyses allow careful evaluators to examine the effects of varying multiple assumptions simultaneously, even adjusting findings for different probability distributions for each assumption in probabilistic sensitivity analysis (Hatswell et al., 2018).

Common Parameters Tested in Sensitivity Analyses
• Discount rate variations
• Exchange rate variations
• Shadow price variations
• Long-term time horizons
• Delays in program implementation
• Cash flows with extensive estimates
• Subjective assessments with different weights
• Cost drivers

Example of Possible Parameters to Test Robustness
ROI of education on life earnings:
• Education costs
• Commencement salary
• Length of time for promotion
• Discount rate
Retirement program:
• Assessment of the number of beneficiaries
• Inflation
• Wage growth
• Discount rate

One of the most important uses of sensitivity tests in financial and economic analyses is to check the robustness of projects by observing how changes in parameter assumptions affect the ranking of alternatives. If rankings are invariant with respect to changes in different assumptions, the results are highly robust. However, if rankings change when assumptions change, professional judgment is needed to determine which assumptions appear most credible (Persaud, 2007). A primary means of checking robustness is the switching or crossover value (H.M. Treasury, 2018): the value of a parameter that reduces a program's NPV to zero (for the discount rate, this is the program's IRR). Switching values are particularly informative for identifying which variables influence program outcomes the most.

Evaluators should note, however, that although sensitivity tests can be considerably useful, these analyses have limitations. Specifically, they are based on the logic of changing the assumptions about one variable while holding the others constant, which may be quite misleading and could lead to erroneous conclusions (Persaud, 2007), as some variables may be correlated. Additionally, "the practice of varying the values of sensitive variables by standard percentages does not necessarily bear any relation to the observed (or likely) variability of the underlying variables" (Belli et al., 2001, p. 148). Finally, sensitivity tests can also be abused if figures are deliberately manipulated to obtain an outcome desirable to fulfill some political motive or agenda.
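As a small illustration of the switching value, the following sketch locates, by bisection, the discount rate at which the $290,000-investment scenario of Figure 5.2 flips from positive to negative NPV. Exact discount factors are used here rather than the figure's rounded ones, so the crossover lands slightly above 5%:

```python
# Sketch: locating the switching (crossover) value of the discount rate for
# the $290,000-investment scenario of Figure 5.2 by bisection. Exact
# discount factors are used, not the figure's four-decimal rounding.

cash_flows = [-290_000, 90_000, 80_000, 60_000, 50_000, 50_000]  # years 0-5

def npv(rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

lo, hi = 0.05, 0.06  # NPV is positive at 5% and negative at 6% (Figure 5.2)
while hi - lo > 1e-8:
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        lo = mid
    else:
        hi = mid

switching_rate = (lo + hi) / 2
print(f"Switching value: {switching_rate:.4%}")  # just above 5%
```

A decision maker can then ask whether a rate above or below this crossover is the more credible for the evaluation context, which is exactly the judgment sensitivity analysis is meant to inform.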

ASSUMPTIONS USED

When conducting cost appraisals, assumptions will frequently need to be made about many different issues. Some cost-­analytical methodologies require more assumptions than others, and some assumptions have fundamental implications for the final results. Early consideration of assumptions can greatly aid planning and save considerable time when data collection commences. For example, one critical assumption in cost-­benefit analysis is whether the analysis will take an economic or a financial perspective. Other assumptions that need to be considered include the choice of discount rate (real¹ vs. nominal²), the inflation rate, the items to include in the analysis, and the extent to which intangible costs and benefits can be reasonably estimated. For a list of general assumptions for economic analyses, refer to the New Zealand Treasury's Cost Benefit Analysis Primer (2005).

It is important to note that, although certain assumptions are common in many cost studies, reflecting a degree of uncertainty caused by imprecision in the underlying data, the actual modeling of the assumptions reflects the analyst's judgment and potential biases. Different assumptions can significantly influence the conclusions of a study. For this reason, evaluators are urged to use sensitivity tests to validate the robustness of their assumptions. To reduce subjectivity, analysts should review the literature, government websites, and other important websites (e.g., World Bank, United Nations, UNESCO) to see the types of assumptions used in similar studies. Note, however, that assumptions should not be adopted simply because other studies have used them; rather, they should be used because their validity and reasonableness can be justified in the context of the particular cost study.

Additionally, because no universal guidelines exist to govern the assumptions used in cost studies, analysts have great latitude in formulating the assumptions for their particular study. Consequently, it is important that all assumptions used in an appraisal be clearly articulated and documented in the report. Thus H.M. Treasury (2018) recommends that a clear rationale be provided for any assumptions used, along with a review of their strengths and weaknesses, so that decision makers can assess the validity and credibility of analyses.

¹ Real interest rate: "An interest rate that has been adjusted to remove the effect of expected or actual inflation" (Office of Management and Budget, 1992, p. 19).
² Nominal interest rate: "An interest rate that is not adjusted to remove the effects of actual or expected inflation. Market interest rates are generally nominal interest rates" (Office of Management and Budget, 1992, p. 19).
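The distinction between real and nominal rates can be made concrete with the standard Fisher relationship, real = (1 + nominal)/(1 + inflation) − 1. The rates in this sketch are arbitrary examples of ours, not figures from the text:

```python
# Sketch: converting between nominal and real discount rates with the
# standard Fisher relationship. The rates shown are arbitrary examples.

def real_rate(nominal: float, inflation: float) -> float:
    """Real rate implied by a nominal rate and an expected inflation rate."""
    return (1 + nominal) / (1 + inflation) - 1

def nominal_rate(real: float, inflation: float) -> float:
    """Nominal rate implied by a real rate and an expected inflation rate."""
    return (1 + real) * (1 + inflation) - 1

# A 7% nominal rate with 3% expected inflation is roughly a 3.9% real rate.
print(f"{real_rate(0.07, 0.03):.4%}")
```

Whichever convention is chosen, the cash flows must match it: real rates discount inflation-adjusted (constant-dollar) flows, and nominal rates discount current-dollar flows.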

IMPLICATIONS FOR OVER‑ AND UNDERESTIMATION OF COSTS AND OUTCOMES

As mentioned in Chapter 3, cost studies require accurate data on costs and outcomes, or the resulting analyses will be flawed and could result in misguided decision making. As such, evaluators need to be exceedingly careful to ensure that costs and outcomes are neither over- nor underestimated, as either can have serious consequences. To prevent over- or underestimation for a newly proposed initiative, evaluators should use the costs and outcomes identification tools described in Chapter 3. When using existing organizational data, however, evaluators need to be alert to the possibility that over- or underestimation has already occurred and that the accuracy of these data needs verification.

According to the literature, the reasons for over- and underestimation are many and varied. Overestimation can occur when (1) accounting records are maintained by volunteers who are not trained in accounting procedures, often resulting in double counting of costs; (2) possibly fraudulent activities are taking place; (3) better funding might be obtained by overestimating costs; or (4) political motives are at play, with the intent to have a particular program terminated by making it appear overly costly. Underestimation is common when (1) accounting records are not properly maintained, resulting in missed or omitted costs; or (2) political motives are at play to generate favorable cost-­analytical results so that a particular program can continue (Persaud, 2007).

Research by Flyvbjerg, Holm, and Buhl (2002), for example, suggests that underestimation of program costs is quite common. Their research reviewed 258 transportation infrastructure projects spanning some 70 years across 20 nations and 5 continents, valued at approximately US$90 billion. Costs were found to be underestimated in the vast majority of cases (i.e., 9 out of 10 projects!).
The researchers found that, on average, actual costs of projects in transportation infrastructure were approximately 28% higher than estimated costs. Rail was 45% higher, fixed links (tunnels and bridges) were 34% higher, and roads were 20% higher. Flyvbjerg and colleagues noted that underestimation of project costs appeared to be a global problem but that it was particularly evident in developing countries. Transportation infrastructure projects were not the only projects prone to cost underestimation. In closing, Flyvbjerg and colleagues (2002) indicated that their conclusion was not intended as an attack on public spending; rather, it was simply the case that their data were insufficient to credibly establish whether estimates from the private sector were just as bad as those found for the public sector:

Cost underestimation cannot be explained by error and seems to be best explained by strategic misrepresentation, i.e., lying . . . the cost estimates used in public debates, media coverage, and decision making for transportation infrastructure development are highly, systematically, and significantly deceptive. So are the cost benefit analyses into which cost estimates are routinely fed to calculate the viability and ranking of projects. The misrepresentation of costs is likely to lead to the misallocation of scarce resources, which, in turn, will produce losers among those financing and using infrastructure, be they taxpayers or private investors. (Flyvbjerg et al., 2002, p. 290)

The important point is that, whether over- or underestimation is deliberate or the result of genuine errors, or because estimates may have been prepared either by unqualified personnel or volunteers, evaluators need to be especially aware of these issues. Costs and outcomes provided by programs need to be checked thoroughly if the cost-­inclusive evaluation is to be at all worthwhile. When evaluations are based on inaccurate data, whether on costs, outcomes, or program activities, programs may appear either worthwhile or unattractive when the reality is different.

SUMMARY

Establishing whether a program is worth its costs and determining the costs of producing specific outcomes are important issues in evaluation. Chapter 4 detailed several methodologies by which program evaluators and administrators evaluate programs. However, these methodologies require consideration of several issues to protect the integrity of the cost-inclusive evaluation. This chapter examined those issues. First, one chooses a cost-analytical methodology suited to the questions posed in an evaluation, also keeping in mind the time available for the evaluation, the evaluation budget, and what cost data are actually accessible. The perspective or perspectives of an evaluation can be financial or economic and with an individual or organizational focus (or both). The financial individual perspective is the most common. The discussion then shifted to how different stakeholder requests for different analytical frameworks can be accommodated. It was pointed out that several types of analyses can usually be done with little additional work when cost data are properly set up using spreadsheet software. Next, stakeholders' reluctance to share cost data or include certain costs and benefits was considered. The advice provided to deal with this issue is to understand why the objection is being raised and initiate discussion with the evaluation funder to try to alleviate fears. The chapter also emphasized that examining cost data in isolation from contextual factors may be quite inappropriate and misleading. The impact of different discount rates on cost-analytical computations also was illustrated. Next, the concept of shadow prices, that is, imputed values for goods and services, was introduced. This was followed by discussing and illustrating the role of sensitivity analyses in gauging the robustness of cost-analysis findings. The importance of documenting and justifying assumptions used in a cost study was then explained. The chapter concluded by highlighting the implications of over- and underestimations of costs and benefits and explored reasons why these might occur and strategies for avoiding both problems.


DISCUSSION QUESTIONS

(1) You are a novice evaluator, and this is your first solo evaluation. You have read our book Cost-Inclusive Evaluation, and you are excited to conduct your first cost-inclusive evaluation. How would you determine which cost-analytical methodology would be best for your evaluation?

(2) Following from Question 1, the methodology has now been selected. However, your client is reluctant to release certain cost data as he or she feels that those costs should not be considered in the analysis. What are your options at this point?

(3) Discount rate choices can greatly influence your analytical computations. Engage in sensitivity analysis to observe how discount rates can affect your computations. Perform sensitivity analyses using discount rates of 2%, 5%, and 7%. Use Figure 5.2 as a guide to set up your analyses. Use the following net cash flows in your analyses:

Year 1     Year 2     Year 3     Year 4     Year 5     Year 6
$75,000    $94,000    $93,000    $99,000    $99,000    $99,000
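For readers who want to check their work programmatically, the sensitivity analysis in Question 3 can be sketched in Python. This is an illustrative sketch, not part of the book's method; it assumes end-of-year cash flows discounted with the standard present-value formula:

```python
# Hypothetical sketch: sensitivity analysis of net present value (NPV)
# across several discount rates, using the Question 3 net cash flows.

def npv(rate, cash_flows):
    """Discount end-of-year net cash flows (Year 1, Year 2, ...) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

cash_flows = [75_000, 94_000, 93_000, 99_000, 99_000, 99_000]

for rate in (0.02, 0.05, 0.07):
    print(f"Discount rate {rate:.0%}: NPV = ${npv(rate, cash_flows):,.2f}")
```

Running the loop shows the NPV shrinking as the discount rate rises, which is the point of the exercise: the choice of rate can change how attractive the same cash flows appear.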

(4) Over- and underestimations of cash inflows or outflows can greatly affect your analyses. This is especially true if the magnitude of the over- or underestimation is large. Assume that your costs for Year 1 are $1,000,000, and revenue from program fees is $1,200,000. Perform calculations using a discount rate of 5% for each scenario in (a) to (d).

(a) How would an overestimation of costs by $120,000 affect your computations if your cash inflows are correct? How would this affect decision making?

(b) How would an underestimation of costs by $120,000 affect your computations if your cash inflows are correct? How would this affect decision making?

(c) How would overestimation of revenue by $100,000 affect your computations if your cash outflows are correct? How would this affect decision making?

(d) How would underestimation of revenue by $100,000 affect your computations if your cash outflows are correct? How would this affect decision making?
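A minimal Python sketch of the four Question 4 scenarios (an illustration, not the book's worked answer; it assumes the Year 1 figures are discounted one year at 5%):

```python
# Hypothetical sketch: effect of over-/underestimated Year 1 costs or revenue
# on the discounted net cash flow, at a 5% discount rate.

RATE = 0.05

def discounted_net(revenue, costs, rate=RATE, year=1):
    """Present value of (revenue - costs) received at the end of `year`."""
    return (revenue - costs) / (1 + rate) ** year

true_costs, true_revenue = 1_000_000, 1_200_000
baseline = discounted_net(true_revenue, true_costs)

scenarios = {
    "(a) costs overestimated by $120,000":   discounted_net(true_revenue, true_costs + 120_000),
    "(b) costs underestimated by $120,000":  discounted_net(true_revenue, true_costs - 120_000),
    "(c) revenue overestimated by $100,000": discounted_net(true_revenue + 100_000, true_costs),
    "(d) revenue underestimated by $100,000": discounted_net(true_revenue - 100_000, true_costs),
}

print(f"Baseline: ${baseline:,.2f}")
for label, value in scenarios.items():
    print(f"{label}: ${value:,.2f} (off by ${value - baseline:+,.2f})")
```

Comparing each scenario with the baseline makes the decision-making stakes concrete: a $120,000 cost overestimate, for instance, wipes out more than half of the apparent discounted net cash flow.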


(e) In Chapter 3, it was stressed that duplications or omissions, or both, of costs or benefits can produce inaccurate analyses. This can result in termination of a beneficial program or continuation of an ineffective program. Based on your computations in (a) to (d), do you see why it is important for both program administrators and program evaluators to be diligent when compiling and using cost data? Discuss as a class.


PART III

Adapting Concepts and Tools from Accounting to Improve Cost-Inclusive Evaluation

CHAPTER 6

Financial Accounting Concepts and Tools

As stated earlier, the use of cost-inclusive evaluation is still quite limited. Moreover, the evaluation literature contains very sparse discussions on this topic and is largely focused on traditional evaluation methods such as cost-effectiveness analysis, cost-benefit analysis, and cost-utility analysis. Our book encourages evaluators and others involved in cost analyses to step out of their comfort zone and embrace new methodologies and approaches to cost-inclusive evaluation. To this end, Chapter 6 of this section introduces readers to financial accounting concepts and tools and Chapter 7 to cost and management accounting concepts and tools. These are both fundamental to the evaluation of costs and offer unique and innovative ways of turning what many view as a summative evaluation into one with profound formative potential. By understanding costs in terms of program operations, managers and administrators gain insights that are possible only by applying concepts long honed in accounting.

Before proceeding, it is important to explicate why we think that financial accounting can help both program evaluators and program administrators. Accounting can be conceptualized as the first type of cost-inclusive evaluation, initiated centuries ago and standardized, in its newest form, for years. Accounting is both different from and in some ways a subset of what "evaluation" means to most people. Financial accounting is a branch of accounting that is concerned with the financial transactions of an organization. It is an instrument of control that collects, records, summarizes, reports, and analyzes the monetary transactions of an entity (McKinney, 2004; Persaud, 2009b). Specifically, the process records and summarizes income and expenditures, and accumulated assets and liabilities, of an organization (Weygandt, Kieso, & Kimmel, 2016). For starters, financial accounting facilitates preparation of the organization's annual financial statements.
These are the income statement (referred to as a statement of activities in a nonprofit entity), which answers the what happened question, and the balance sheet (referred to as a statement of financial position in a nonprofit entity), which answers the where is the organization now question. The preparation of financial statements is mandatory for all organizations, including nonprofit entities.1 Financial statements must be produced and filed annually to comply with income tax reporting laws in every country. Financial accounting is governed by a body of standardized rules and conventions. These principles are called generally accepted accounting principles (GAAP) and were developed by the accounting profession with input from some major partners. GAAP must be employed by those preparing financial statements to record and report financial information (Garrison et al., 2017; McKinney, 2004; Weygandt et al., 2016). Financial statements report the organization's financial performance for a given year. These statements are prepared by external certified public accountants who attest to the fairness and accuracy of the information presented in the reports. These reports are of considerable interest to many external stakeholders, including banks, funders, creditors, investors, shareholders, suppliers, analysts, the press, and government tax agencies (Persaud, 2009b). Given the wealth of information contained in financial statements, program administrators should more fully understand and be able to interpret these statements. This understanding can help with program forecasting and strategic planning. For their part, program evaluators can also benefit from understanding financial statements, which show fundamental program operations and program financial health by itemizing costs and revenues and by making explicit monetary assets and liabilities.
Reporting the financial health of a program can strengthen applications for funding and enhance evaluation reports when it can be shown that a program is sufficiently liquid to pay its debt, can support itself as a going concern, spends less than it earns, can efficiently raise funds, and so on. Because financial statements are prepared by qualified accountants according to standardized methods, information from this source is extremely credible; reliability and validity of cost data in financial statements are high. Knowing how to interpret financial statements can also save time that otherwise might be spent delving into program accounting records, which can be daunting, if they are even provided by sometimes reluctant administrators. Financial statements can, if understood correctly, provide the information needed to perform the various types of cost-inclusive evaluations discussed in Chapter 4.

1 For example, in the United States "tax-exempt organizations, nonexempt charitable trusts, and section 527 political organizations" (Internal Revenue Service, n.d., para. 1) are all required to fill out Form 990—Return of Organization Exempt From Income Tax (Internal Revenue Service, 2021) to comply with IRS section 6033. Part XII of Form 990 deals with Financial Statements and Reporting. These records are generally open to public scrutiny.

Understandably, stepping outside of one's comfort zone is not always easy. The cost-analytical methodologies discussed in this section can invoke mixed reactions, with some embracing these methodologies and others being harshly critical. For example, one critic indicated that the "discussion and promotion of such methodologies have nothing to do with evaluation." Moreover, "financial statements and financial statement analysis is not a necessary skill for most program evaluators, nor . . . is break-even analysis," which is covered in Chapter 7. We disagree with our critics who feel this way. The first author (Persaud), who is an evaluator and a CPA, is convinced that the methodologies discussed in Chapters 6 and 7 can completely revolutionize the way in which cost-inclusive evaluations are performed. The competencies for evaluators are many, and evaluators can benefit from expanding their repertoire of tools to meet program and evaluation funder needs. Learning, for evaluators and most professions, cannot be stagnant in its scope or reach. Rather, our education is a continuing, evolving process—one that responds to the dynamics in our environment to allow us to flourish. Furthermore, scholarship continuously evolves with experience and learning. All professionals, program evaluators being no exception, should therefore embrace new ways of doing things if this improves the workings of the organization. In summary, financial accounting is a normal and routine activity of every organization, irrespective of the nature of its business (Persaud, 2009b).
However, very few program administrators and even fewer program evaluators understand the principles that govern financial accounting and the terminology of this discipline, and only a handful can interpret financial statements. This chapter introduces readers to basic financial accounting concepts and tools, as these tools can help to make program operations more efficient, enhance decision making, and convince funders of the soundness of the organization.

UNDERSTANDING ACCOUNTING RECORDS TO EXTRACT RELEVANT DATA

Accounting records consist of all records that pertain to an organization's financial transactions during the course of normal business operations. These records can take many different forms, such as checks that are received or paid; invoices pertaining to both purchases and sales; bank statements and other banking documents; bonds and investments; contracts and orders; documentation in journals, general ledgers, and trial balances in either hard copy (i.e., books) or soft copy (i.e., accounting software) formats; and records of assets and liabilities on organization books. Such records are a critical component for the preparation of proper and accurate financial statements, which are mandatory for tax reporting purposes and for preparation of Form 990, which is used in the United States. They are also important when financial reviews, compliance audits, or tax audits are conducted, as the original records are reviewed during these types of activities.

In general, most organizations collect a vast amount of accounting records in a very short time. As such, these records need to be properly filed and catalogued, or the process can quickly become chaotic and unmanageable. As a rule, records should be batched into financial years, that is, a 12-month period. This period will be different for every entity and commences on the date of incorporation of the entity. For example, if an entity was incorporated on February 1, its financial year will run from February 1 of the year of incorporation (e.g., 2022) to January 31 of the following year (e.g., 2023). Cataloguing by financial year is particularly important, as national income tax systems generally require that accounting records be maintained for between 5 and 7 years, depending on jurisdiction.2 Accounting records maintained for this period, however, can become quite bulky and hard to manage unless a proper system for storage and filing is established up front. Unfortunately, this is often not done, particularly in smaller organizations, because these organizations cannot afford a dedicated accounting department. As a result, those tasked with preparing the financial statements generally have to engage in much preparatory work to catalogue records prior to creation of actual financial statements.
This extra work is quite tedious and can cost the organization a considerable sum. It can also delay the preparation of the financial statements, which could have other repercussions, such as the incurrence of penalties for late filing for income tax reporting. Furthermore, when financial accounting records are not maintained in a systematic manner, it becomes impossible to use the data for internal decision making that is continuous and ongoing. Not having financial data readily available can be a serious impediment to the success of any organization. In the same manner in which an individual needs to be able to manage income and expenditure, organizations need to understand their cash flows so that profitability can be maximized, in the case of private sector entities, or bankruptcy can be prevented, in the case of nonprofit entities.

2 The Securities and Exchange Commission in the United States requires that audited financial statements and their accompanying records be maintained for at least 7 years in the event of a tax audit.

REASONS FOR KEEPING GOOD RECORDS AND USING A SYSTEMATIC PROCESS FOR RECORD KEEPING

• GARBAGE IN, GARBAGE OUT—Information can only aid decision making if proper systems are in place to generate accurate, reliable, and timely information.

• Business success depends on knowing your organization's cash flows and your assets and liabilities. Good accounting records help you to strategically plan and position your organization.

• When faced with a dispute, physical documented evidence is important to challenge the dispute. For example, if you purchased 100,000 COVID-19 test kits from China but only received 90,000, your customs invoice will document the quantity received. If you do not maintain this document, you will have no recourse in trying to get a credit for your next order or the missing test kits. It will also create confusion with your inventory count on your financial statements because your records will show that you purchased 100,000 test kits.

• Financial institutions usually request audited financial statements when loans are being considered.

• Maintaining good accounting records can save you money. If your external auditor must engage in preparatory work such as sorting and batching bills, this will cost you money. It can also delay submission of your financial statements to the tax authorities, which will carry a monetary penalty for late submission.

• Accurate records are needed for income tax reporting purposes, even if you are a nonprofit entity. Good record keeping is a mandatory legal requirement. Audits can be conducted many years later, so records must be properly maintained and archived. Failure to supply requested documents to the tax authorities can result in a full-fledged audit of your operations, which may be quite intimidating.

As a rule, recording accounting information can take one of two forms. The single-­entry form is simple in practice and records only a single transaction, which means that there is no link to the other side of the transaction. It is essentially concerned with cash receipts and cash disbursements and uses a single cash book to document these transactions. Several serious limitations are associated with this method of recording. Specifically, it is not suited to the preparation of financial statements, it lacks checks and balances, it does not record assets, it increases the risk of errors, and it has a high potential for creating inaccurate and incomplete records. To negate these limitations, the double-­entry form of accounting records two entries for each transaction—­one debit and one credit (see Table 6.1). This method traces the movement of transactions, thus minimizing mistakes, while making it easier to detect potential fraud. The latter approach is recommended for all business operations, as all businesses must prepare financial statements. The remainder of this section discusses common financial accounting records that all organizations produce—­some of which can be useful in cost-­inclusive evaluations.
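A toy illustration, in Python, of why double entry provides the checks and balances just described: every transaction carries equal total debits and credits, so an unbalanced entry can be rejected at the moment of recording. (This is a hypothetical sketch for intuition, not accounting software; the `record` function and its data layout are invented for this example.)

```python
# Hypothetical sketch: a double-entry transaction is a set of debits and
# credits that must sum to the same total before it may be recorded.

def record(journal, entries):
    """entries: list of (account, debit, credit) tuples for one transaction."""
    debits = sum(d for _, d, _ in entries)
    credits = sum(c for _, _, c in entries)
    if debits != credits:
        raise ValueError(f"unbalanced entry: debits {debits} != credits {credits}")
    journal.append(entries)

journal = []
# Simple entry: buy a $200 piece of equipment with cash.
record(journal, [("Furniture and Equipment", 200, 0), ("Cash", 0, 200)])
# Compound entry: equipment, supplies, and rent all paid from cash.
record(journal, [("Furniture and Equipment", 400, 0),
                 ("Office Supplies", 600, 0),
                 ("Rent", 2_000, 0),
                 ("Cash", 0, 3_000)])
```

A single-entry cash book has no such cross-check, which is exactly the weakness the text attributes to it.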


General Journal

A general journal is the book of original entry (or an electronic spreadsheet) that details all of the organization's financial transactions in chronological order of occurrence (McKinney, 2004; Needles, Powers, & Crosson, 2011). It is a double-entry process that shows the debit and credit effects of each transaction on specific accounts. The process of entering transactions into a journal is referred to as journalizing. Journalizing involves making a separate entry for each transaction, using either a simple entry (two accounts only) with one debit and a corresponding credit or a compound entry (more than two accounts; Weygandt et al., 2016). An example of both a simple entry and a compound entry is provided in Table 6.1. The general journal carries five columns of data: date, account title and description, reference, debit, and credit. The general journal facilitates the preparation of the general ledger.

TABLE 6.1.  Simple versus Compound Journal Entry

GENERAL JOURNAL ILLUSTRATION

Date          Account Title/Description    Ref    Debit    Credit
Simple Entry (two accounts only)
Jan 02, 202X  Furniture and Equipment               200
                Cash                                          200
Compound Entry (several accounts)
Jan 03, 202X  Furniture and Equipment               400
              Office Supplies                       600
              Rent                                2,000
                Cash                                        3,000

Double-Entry System: Debits = Credits



General Ledger

The general ledger is essentially the second book of entry in the financial accounting system. It is an extension of the general journal and provides a summary grouping of the organization's accounts (McKinney, 2004) in one location. It comprises five categories: revenue, expenses, assets, liabilities, and equity. Each category contains several different accounts (Weygandt et al., 2016). For example, the category of assets would contain all the asset accounts set up by the organization, such as cash, accounts receivable, inventory, equipment, furniture, and so on.


All entries from the general journal are posted into the general ledger. The accounts in the general ledger are referred to as T-Accounts. The left side of a T-Account is the debit side, and the right side is the credit side. To record an increase in an asset or expense account, the T-Account is debited; to record a decrease, the T-Account is credited. If you are confused, think of it this way: If you purchase a cellular phone for $400 cash, you now have an asset (a phone) worth $400. However, if you paid cash, you would have $400 less in your wallet. You have increased one asset account but decreased another asset account by the same amount (simple double entry). In contrast, an increase to a liability, capital, or revenue account is reflected by a credit to the account, and a decrease is reflected by a debit to the account (McKinney, 2004). For instance, if instead you purchased your phone using an interest-free loan from your family, you would have an account payable (a liability). This account would be credited with $400 on the purchase date. As you make monthly repayments of, say, $100, you will debit the accounts payable to reduce the balance and credit your cash to decrease the amount of cash in your wallet. Each line item from the general journal is transferred to the general ledger using the identical account name and identical debit or credit amount. For instance, the purchase of a computer would be recorded as a debit in a T-Account called Furniture and Equipment, with the corresponding credit being recorded in a T-Account called Cash (see Table 6.2). After all transactions are transferred from the general journal, the individual T-Accounts are then balanced. The balance from each T-Account is then transferred to the third book of entry, which is known as the trial balance.

TABLE 6.2.  Entry to Record Computer Purchase

GENERAL LEDGER ILLUSTRATION

Furniture and Equipment
Debit                    Credit
Jan 02, 202X  400

Cash
Debit                    Credit
                         Jan 02, 202X  400

Double-Entry System: Debits = Credits
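The journal-to-ledger flow just described can be sketched, hypothetically, in Python: each journal line is posted into a per-account T-account, and each account is then balanced (debits minus credits). The data layout is invented for illustration; it is not the book's procedure.

```python
# Hypothetical sketch: post journal lines into general-ledger T-accounts,
# then balance each account (positive = debit balance, negative = credit balance).

from collections import defaultdict

# Each journal line: (account, debit, credit), as in the general journal.
journal = [
    ("Furniture and Equipment", 400, 0),  # computer purchase (debit an asset)
    ("Cash", 0, 400),                     # paid in cash (credit an asset)
]

ledger = defaultdict(lambda: {"debit": 0, "credit": 0})
for account, debit, credit in journal:
    ledger[account]["debit"] += debit
    ledger[account]["credit"] += credit

balances = {acct: t["debit"] - t["credit"] for acct, t in ledger.items()}
print(balances)  # Furniture and Equipment carries a 400 debit balance
```

Because every posting came from a balanced journal entry, the account balances necessarily sum to zero, which is what the trial balance (next) verifies in aggregate.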

Trial Balance

The trial balance is the third book of entry in the recording process. It is a list of the organization's accounts. This report is used for preparing the organization's financial statements. It provides a summation of debits and credits at a specific time. Like the general journal, the debit column is listed before the credit column (see Table 6.3). Both columns should balance when summed (Weygandt et al., 2016). Note that it is possible for a trial balance to balance but still be incorrect. For example, if entries were never recorded or were recorded incorrectly, the trial balance will not detect this. Thus, if $1,000 is entered as $10,000 in both accounts to facilitate double-entry accounting, the trial balance will still balance. On the other hand, if the trial balance does not balance, this is an indication that an error has occurred either during journalizing and posting or as a result of fraud in the organization.

TABLE 6.3.  Trial Balance

ORGANIZATION NAME
TRIAL BALANCE
DECEMBER 31, 202X

                                                  Debit    Credit
List All General Ledger Accounts and Balances
Total (Debits = Credits)                           xxxx     xxxx
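The trial-balance check, and the blind spot just described, can be illustrated with a small hypothetical sketch: the check verifies only that total debits equal total credits, so an amount that is wrong on both sides of an entry slips through.

```python
# Hypothetical sketch: a trial balance only verifies that total debits
# equal total credits; it cannot catch an amount that is wrong on BOTH sides.

def trial_balance(ledger_balances):
    """ledger_balances: account -> (debit_total, credit_total)."""
    debits = sum(d for d, _ in ledger_balances.values())
    credits = sum(c for _, c in ledger_balances.values())
    return debits, credits, debits == credits

# A $1,000 purchase mistakenly journalized as $10,000 in both accounts:
ledger = {"Equipment": (10_000, 0), "Cash": (0, 10_000)}
print(trial_balance(ledger))  # → (10000, 10000, True): balances, yet the amount is wrong
```

An out-of-balance result, by contrast, is a reliable signal that something went wrong in journalizing or posting.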

Financial Statements

The trial balance feeds into the preparation of the financial statements, which comprise three primary documents (see Figure 6.1), namely, the income statement, statement of cash flows, and balance sheet (Albrecht, Stice, Stice, & Swain, 2011). The financial statements are accompanied by a document called notes to the financial statements. The income statement and balance sheet show the financial performance of the organization and should be audited to comply with income tax reporting regulations in the organization's country. A discussion of these documents follows.

FIGURE 6.1.  Documents comprising the financial statements (shown as a stack, top to bottom):
• Notes to Financial Statements (explanations and clarity)
• Ending Balance Sheet (where you are)
• Income Statement and Statement of Cash Flows (what happened)
• Beginning Balance Sheet (where you were)

INCOME STATEMENT: IMPORTANCE, TERMINOLOGY, AND INTERPRETATION

As previously mentioned, the income statement is one of three primary financial statements required under GAAP. This statement provides a summary of the revenues and expenses (see Table 6.4) for the entity's reporting period and shows the resultant profit or loss for the same time period (Weygandt et al., 2010, 2016). It answers the question: what happened? A typical income statement shows two income figures: gross income and net income (see Table 6.4). The level of detail that is provided on an income statement will vary according to the nature of the organization's business (Albrecht et al., 2011; Garrison et al., 2017). The terminology used may also vary. For example, as previously mentioned, in a nonprofit organization, the income statement is called a statement of activities.

TABLE 6.4.  Presentation—Income Statement

Name of Organization
Income Statement
Year Ended December 31, 202X

Revenue                                                       xxxx ①
Less Cost of Goods Sold (for for-profit companies)
  or Cost of Services (for service providers)                  xxx ②
Gross Income                                                   xxx ③ = ① – ②
Less Selling & Administrative Expenses (Itemize Individually)
  Furniture and Equipment                           xx
  Rent                                              xx
  Salaries                                          xx
  Utilities                                         xx
Total Expenses                                                 xxx ④
Net Income                                                     xxx ⑤ = ③ – ④

Presentation for a Nonprofit Statement of Activities:
Revenues/Gains/Other Support – Expenses = Change in Net Assets
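As a numeric sketch of the Table 6.4 arithmetic (all amounts invented for illustration), the two income figures fall out of two subtractions:

```python
# Hypothetical sketch: gross and net income per the Table 6.4 layout,
# using invented figures for a small service provider.

revenue = 500_000
cost_of_services = 200_000
expenses = {"Furniture and Equipment": 10_000, "Rent": 60_000,
            "Salaries": 150_000, "Utilities": 20_000}

gross_income = revenue - cost_of_services    # ③ = ① - ②
total_expenses = sum(expenses.values())      # ④
net_income = gross_income - total_expenses   # ⑤ = ③ - ④

print(gross_income, total_expenses, net_income)  # → 300000 240000 60000
```

For a nonprofit, the same subtraction logic yields the change in net assets rather than net income.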


STATEMENT OF CASH FLOWS: IMPORTANCE, TERMINOLOGY, AND INTERPRETATION

Like the income statement, the statement of cash flows is also helpful for answering the question: what happened? This statement summarizes where cash originated from and how it was used (Albrecht et al., 2011; Garrison et al., 2017). It is concerned with measuring cash generation and how it is used to fund operations and investments and pay debts. The statement of cash flows comprises three activities: operating, investing, and financing (Albrecht et al., 2011; see Table 6.5). The operating activities category is considered the most important of the three categories, as it provides a measure of the entity's ability to sustain itself as a going concern. In most cost assessments of nonprofits, operating activities are frequently the exclusive focus, as there may be no investing or financing activities.

• Operating Activities are concerned with everyday activities that are a normal and routine part of operating a business. Specifically, this encompasses all activities that lead to the incurrence of expenses or the generation of revenue that directly affects net income (Albrecht et al., 2011; Weygandt et al., 2010). These activities are derived from INCOME STATEMENT ITEMS.

• Investing Activities examine the effects of transactions from disposals and acquisitions of investments and non-current assets (e.g., sale of building, purchase of equipment), as well as repayments and loans (Weygandt et al., 2010). These activities are derived from CHANGES IN INVESTMENTS/NON-CURRENT ASSETS.

• Financing Activities are concerned with all transactions that affect long-term liability and equity. Included in this category are activities that lead to cash acquisitions (e.g., issuance of bonds, sale of stock), and repayments of those debts, along with dividend payments (Albrecht et al., 2011; Weygandt et al., 2010). These activities are derived from CHANGES IN NON-CURRENT LIABILITIES/EQUITY.


TABLE 6.5.  Presentation—Statement of Cash Flows

Name of Organization
Statement of Cash Flows
Period Ended December 31, 202X

Cash Flows from Operating Activities (i.e., Income Statement Items)
  Inflows (Itemize Individually)
    Revenue from Goods/Services                   xx
    Dividends/Interest Received                   xx          xx ①
  Outflows (Itemize Individually)
    Operating Expenses                            xx
    Supplies for Inventory                        xx
    Interest Payments                             xx
    Taxes                                         xx          xx ②
  Net Cash provided (used)                                   xxx ③ = ① – ②

Cash Flows from Investing Activities (i.e., Changes in Investments/Non-Current Assets)
  Inflows (Itemize Individually)
    Sale of Investments                           xx
    Sale of any Non-Current Assets                xx
    Loan Principal from Other Organizations       xx          xx ④
  Outflows (Itemize Individually)
    Purchase of Investments                       xx
    Purchase of Non-Current Assets                xx
    Loans to Other Organizations                  xx          xx ⑤
  Net Cash provided (used)                                   xxx ⑥ = ④ – ⑤

Cash Flows from Financing Activities (i.e., Changes in Non-Current Liabilities/Equity)
  Inflows (Itemize Individually)
    Long-Term Debt (e.g., Bonds/Notes)           xxx
    Common Stock                                 xxx         xxx ⑦
  Outflows (Itemize Individually)
    Dividends                                     xx
    Long-Term Debt                                xx          xx ⑧
  Net Cash provided (used)                                   xxx ⑨ = ⑦ – ⑧

Net Increase (Decrease) in Cash                              xxx ⑩ = ③ + ⑥ + ⑨
+ Cash at Start of Period                                    xxx ⑪
= Cash at End of Period                                      xxx ⑫ = ⑩ + ⑪

Non-Investing and Financing Activities (Itemize Individually)  xxx ⑬
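A hypothetical sketch of the Table 6.5 roll-up, with invented figures: each activity section nets its inflows against its outflows, and the three section totals sum to the net change in cash.

```python
# Hypothetical sketch: net change in cash from the three activity sections
# of a statement of cash flows (all figures invented for illustration).

def net_cash(inflows, outflows):
    return sum(inflows.values()) - sum(outflows.values())

operating = net_cash({"Revenue from services": 800_000, "Interest received": 5_000},
                     {"Operating expenses": 700_000, "Taxes": 30_000})
investing = net_cash({"Sale of investments": 20_000},
                     {"Purchase of equipment": 50_000})
financing = net_cash({"Long-term debt issued": 100_000},
                     {"Debt repayments": 40_000})

cash_at_start = 25_000
net_change = operating + investing + financing   # ⑩ = ③ + ⑥ + ⑨
cash_at_end = cash_at_start + net_change         # ⑫ = ⑩ + ⑪
print(net_change, cash_at_end)                   # → 105000 130000
```

Note that a positive operating figure alongside negative investing cash flow, as here, is the pattern of a going concern that is reinvesting, which is why the operating section carries the most diagnostic weight.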




BALANCE SHEET: IMPORTANCE, TERMINOLOGY, AND INTERPRETATION

A balance sheet is a statement that shows the financial position of an entity at a precise point in time (Albrecht et al., 2011; McKinney, 2004). Specifically, an entity's fiscal year spans 12 months from the date of incorporation. Thus, if incorporated on April 1, the end of the fiscal year will be March 31 of the following year. Given that the balance sheet is presented at a specific date in time, it can be considered as a snapshot or picture of the entity's financial worth at that date. As previously mentioned, in nonprofit entities, this statement is referred to as a statement of financial position rather than a balance sheet. Regardless, it essentially follows the same format and principles for preparation.

IMPORTANCE OF BALANCE SHEET/STATEMENT OF FINANCIAL POSITION
• Shows financial position of entity at a specific point in time.
• Facilitates strategic planning when multiple years are compared.
• Shows liquidity of operations.

A balance sheet for a for-profit entity comprises three distinctive classifications or groups: assets (what the entity owns), liabilities (what the entity owes), and capital or equity (what is invested in the entity; Albrecht et al., 2011). In a nonprofit organization, equity is replaced with net assets and comprises unrestricted assets that can be used for any activity, plus restricted assets that can be used only for a subset of operating activities with donor instructions. Depending on the nature of the organization, the information in a balance sheet or statement of financial position may be of great interest to either a diverse range or a narrower group of stakeholders. For example, the balance sheet of a public company (i.e., a company that trades on the stock market) would be of interest to many stakeholders, including current and potential investors, banks, creditors, senior management of the company, competitors, and government tax agencies.
In contrast, the statement of financial position of a nonprofit organization would mainly be of interest to potential funders and government tax agencies. A balance sheet is based on the basic formula Assets = Liabilities + Equity (Albrecht et al., 2011; McKinney, 2004). In the case of the statement of financial position, the basic formula is Assets (With and Without Donor Restrictions) = Liabilities + Net Assets (With and Without Donor Restrictions). This formula may admittedly be quite confusing for a novice unfamiliar with accounting, because people intuitively conceptualize liabilities as a negative. For example, if you have $100,000

Financial Accounting 

 135

in the bank (an asset) and you also owe $25,000 to someone (a liability), your equity or net worth is $75,000 (i.e., $100,000 – $25,000). The confusion for many readers occurs because liabilities are shown on the balance sheet or statement of financial position as a positive number. This is done because a balance sheet or statement of financial position must balance: if liabilities were shown with a negative sign, the statement would not balance. Although the items on a balance sheet or statement of financial position may differ depending on the nature of the entity or industry, the items described below are generally common. Balance sheets or statements of financial position can be presented in two formats. The traditional approach shows Assets on the left side of the statement and Liabilities plus Equity (or Liabilities plus Net Assets in a nonprofit) on the right side (see Figure 6.2). The more contemporary approach shows Assets at the top of the statement and Liabilities plus Equity (or Liabilities plus Net Assets in the case of a nonprofit) at the bottom (see Figure 6.3 on p. 138). Table 6.6 shows a typical balance sheet format for a for-profit company, and Table 6.7 shows a typical statement of financial position format for a nonprofit entity. Both illustrations are presented using the contemporary rather than the traditional format.
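As a minimal sketch (using the $100,000/$25,000 figures from the example above, not any real statement), the basic accounting equation can be checked in a few lines of Python:

```python
# Illustrative sketch: the basic accounting equation, with the
# bank-account example from the text.
assets = 100_000       # e.g., cash in the bank
liabilities = 25_000   # e.g., money owed to someone else
equity = assets - liabilities  # net worth; "net assets" in a nonprofit

# Liabilities appear as a positive number so that the statement balances:
assert assets == liabilities + equity
print(equity)  # 75000
```

The same check works for a nonprofit by substituting net assets for equity.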

BALANCE SHEET FORMULA: TRADITIONAL APPROACH—FOR-PROFITS

Total Assets (left side) = Total Liabilities + Total Equity (right side)
• Total Assets: Current Assets; Fixed Assets (Non-Current Assets)
• Total Liabilities: Current Liabilities; Long-Term Liabilities (Non-Current Liabilities)
• Total Equity: Owners Capital; Retained Earnings

  FIGURE 6.2    Traditional balance sheet presentation.


TABLE 6.6.  Presentation—Contemporary Balance Sheet for a For-Profit Organization

Name of Organization
Balance Sheet
Period Ended December 31, 202X

ASSETS
Current Assets
  Cash                              100,000
  Accounts Receivable                20,000
  Inventory                          50,000
  Total Current Assets              170,000   ①
Fixed Assets
  Land and Buildings                200,000
  Equipment                         100,000
  Furniture                          75,000
  Total Fixed Assets                375,000   ②
Total Assets                       $545,000   ③ = ① + ②

LIABILITIES
Current Liabilities
  Bank Overdraft                     20,000
  Taxes Payable                      25,000
  Accounts Payable                   15,000
  Total Current Liabilities          60,000   ④
Long-Term Liabilities
  Long-Term Loan                     50,000
  Bonds Payable                      75,000
  Total Long-Term Liabilities       125,000   ⑤
Total Liabilities                   185,000   ⑥ = ④ + ⑤

CAPITAL
  Equity                            300,000
  Retained Earnings                  60,000
  Total Capital                     360,000   ⑦
Total Liabilities and Capital      $545,000   ⑧ = ⑥ + ⑦

Assets = Total Liabilities + Capital. Alternatively, Assets – Liabilities = Capital.
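The circled subtotals in Table 6.6 can be cross-checked programmatically. The sketch below (a hypothetical representation of the sample statement, not from the book) rebuilds it as dictionaries and verifies that it balances:

```python
# Hypothetical sketch: the Table 6.6 balance sheet as nested dictionaries,
# with assertions for the circled subtotals.
balance_sheet = {
    "current_assets": {"Cash": 100_000, "Accounts Receivable": 20_000,
                       "Inventory": 50_000},
    "fixed_assets": {"Land and Buildings": 200_000, "Equipment": 100_000,
                     "Furniture": 75_000},
    "current_liabilities": {"Bank Overdraft": 20_000, "Taxes Payable": 25_000,
                            "Accounts Payable": 15_000},
    "long_term_liabilities": {"Long-Term Loan": 50_000, "Bonds Payable": 75_000},
    "capital": {"Equity": 300_000, "Retained Earnings": 60_000},
}

total = {group: sum(items.values()) for group, items in balance_sheet.items()}
total_assets = total["current_assets"] + total["fixed_assets"]        # ③ = ① + ②
total_liabilities = (total["current_liabilities"]
                     + total["long_term_liabilities"])                # ⑥ = ④ + ⑤

assert total_assets == 545_000
assert total_liabilities + total["capital"] == total_assets           # ⑧ = ⑥ + ⑦
```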


TABLE 6.7.  Presentation—Contemporary Statement of Financial Position for a Nonprofit Organization

Name of Organization
Statement of Financial Position
Period Ended March 31, 202X

ASSETS
Current Assets
  Cash and Cash Equivalents         100,000
  Grants                             20,000
  Contributions Receivable           50,000
  Total Current Assets              170,000   ①
Fixed Assets
  Land and Buildings                200,000
  Equipment                         100,000
  Furniture                          75,000
  Total Fixed Assets                375,000   ②
Total Assets                       $545,000   ③ = ① + ②

LIABILITIES
Current Liabilities
  Loan Payable                       20,000
  Payroll Taxes Payable              25,000
  Accounts Payable                   15,000
  Total Current Liabilities          60,000   ④
Long-Term Liabilities
  Long-Term Loan                     50,000
  Mortgage Payable                   75,000
  Total Long-Term Liabilities       125,000   ⑤
Total Liabilities                   185,000   ⑥ = ④ + ⑤

NET ASSETS
  Unrestricted                      150,000
  Permanently Restricted            210,000
  Net Assets                        360,000   ⑦
Total Liabilities and Net Assets   $545,000   ⑧ = ⑥ + ⑦

Assets = Total Liabilities + Net Assets. Alternatively, Assets – Liabilities = Net Assets.
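For the nonprofit case, the same balancing logic applies, but equity is replaced by net assets split into unrestricted and donor-restricted portions. A short sketch with the Table 6.7 figures:

```python
# Sketch with the Table 6.7 figures: in a nonprofit, Assets – Liabilities
# equals Net Assets (with and without donor restrictions).
total_assets = 545_000
total_liabilities = 185_000
net_assets = {"unrestricted": 150_000, "permanently_restricted": 210_000}

assert total_assets - total_liabilities == sum(net_assets.values())  # 360,000 ⑦
print(sum(net_assets.values()))  # 360000
```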


BALANCE SHEET FORMULA: CONTEMPORARY APPROACH—FOR-PROFITS

Total Assets (top of statement)
• Current Assets; Fixed Assets (Non-Current Assets)
Total Liabilities
• Current Liabilities; Long-Term Liabilities (Non-Current Liabilities)
Total Equity (bottom of statement)
• Owners Capital; Retained Earnings

  FIGURE 6.3    Contemporary balance sheet presentation.

Total Assets

As previously mentioned, assets are resources that are owned by an entity (Albrecht et al., 2011). They are classified into two categories: current and fixed (i.e., non-current assets).

Current Assets

Current assets comprise cash assets and assets that can easily be converted into cash.

• Cash and cash equivalents is the most liquid current asset and usually appears as the first line item on the balance sheet. It includes actual physical cash and money in bank accounts. It can also include short-term cash securities (e.g., stocks, short-term deposits, marketable securities) that can be liquidated at short notice.

• Accounts receivable represents all monies owed to the entity for goods or services supplied on credit, minus any bad debts (i.e., amounts that were not paid to the entity). In a nonprofit organization, this line item is called Contributions Receivable.


• Inventory is a record of all goods in stock. In a manufacturing entity, it comprises three items: raw materials, work in progress, and finished goods. Programs providing health services often have considerable inventories of medical supplies. Programs providing human services and education may have minor inventories, primarily office supplies and books.

• Prepaid expenses represent expenses that have been paid in advance of the actual incurrence of the expense. When such an expense is realized, it is reported on the income statement, and prepaid expenses are reduced by the equivalent amount on the balance sheet.

Fixed Assets

Fixed assets are possessions that require time and effort to convert into cash. They are also known as long-term assets or non-current assets. They include tangible assets such as land, buildings, machinery, furniture, equipment, and vehicles; intangible identifiable assets such as patents and copyrights; and intangible unidentifiable assets such as goodwill and brand recognition. Tangible assets other than land are recorded on the balance sheet at cost minus accumulated depreciation (i.e., an amount that reflects the reduction in asset value due to normal wear and tear over time). In the case of a nonprofit organization, non-current assets may also include assets such as contributions receivable and endowment investments.
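The "cost minus accumulated depreciation" carrying value can be made concrete with a small sketch. The method (straight-line) and all figures below are illustrative assumptions; the book does not prescribe a particular depreciation method here:

```python
# Illustrative sketch (method and figures are assumptions): straight-line
# depreciation, with the carrying value = cost - accumulated depreciation.
cost = 50_000            # equipment purchase price
salvage_value = 5_000    # estimated value at the end of its useful life
useful_life_years = 9

annual_depreciation = (cost - salvage_value) / useful_life_years  # 5,000/year

def carrying_value(years_in_service: int) -> float:
    """Balance-sheet value after the given years of normal wear and tear."""
    accumulated = annual_depreciation * min(years_in_service, useful_life_years)
    return cost - accumulated

print(carrying_value(3))  # 35000.0
```

Land, as the text notes, is not depreciated and stays on the books at cost.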

Total Liabilities

Total liabilities represent all debts or financial obligations of an entity. Liabilities are classified into two categories: current and long term (i.e., non-current liabilities).

• Current liabilities are short-term financial obligations that are due within a 12-month period. They include financial obligations to creditors, which are classified as accounts payable on the balance sheet, and other types of financial obligations, such as the current portion of debt, income taxes payable, payroll taxes payable, sales taxes payable, and deferred revenue.

• Long-term liabilities are financial commitments that are due after a year. They include items such as bonds payable, capital leases, pension liabilities, and long-term loans.


Total Equity

Equity is essentially the net worth of the entity. It comprises capital investments from various shareholders plus retained earnings that are maintained and not distributed. Put simply, in a for-profit company, it is the difference between the organization's total assets and its total liabilities. Net assets replace equity in a nonprofit organization. Like equity in a for-profit company, net assets are the difference between total assets and total liabilities.

NOTES TO FINANCIAL STATEMENTS

The notes to the financial statements are an integral and mandatory requirement to ensure compliance with the full disclosure principle. The notes provide clarity and detail on the specific accounting policies used by the company and give insight into the calculations behind the exact figures detailed on the financial statements. For instance, the notes detail the methods used for depreciation and inventory valuation and how depreciation was computed and inventory was valued, along with other computations, such as the schedule of retained earnings. If the entity is involved in foreign currency transactions, the notes would also provide details of the types of transactions, along with the exchange rates used for conversion of foreign currency. In short, the notes to the financial statements are essentially an appendix detailing how major computations that appear on the financial statements were derived. This information is of great interest to analysts, potential investors, and current shareholders, as it allows these stakeholders to properly evaluate an entity's financial health as well as its performance. It should be noted that different valuation methods affect net income, which in turn affects dividend payments and stockholders' worth.

RATIO ANALYSIS

Ratio analysis is a quantitative financial performance measurement methodology used for decision making (Persaud, 2009b; Weygandt et al., 2010). Ratio analysis is also used for benchmarking against past program performance and against the performance of other programs addressing similar needs. In industry, this key performance indicator is used by top-level management for long-term strategic planning aimed at growth and market leverage and to assess an organization's profitability, liquidity, and more


(Persaud, 2009b). It is also of interest to persons external to the organization (e.g., financial analysts, competitors, potential investors). In the nonprofit sector, ratio analysis is similarly used by management for different types of internal decision making (see Figure 6.4). For instance, it is important for nonprofits to evaluate their operations, programs, and services, as well as to assess their financial stability and viability as a going concern. Ratio analysis encompasses five broad categories of ratios, namely, liquidity, efficiency, profitability, leverage, and market value (Corporate Finance Institute, 2020; see Figure 6.5). Each category comprises several different types of ratios that provide very different kinds of information (Albrecht et al., 2011). It is therefore important to determine your information needs and what you are trying to learn, so that the correct ratios can be computed. Ratio analyses are based on current and historical data from balance sheets and income statements. Each ratio expresses a relationship between two quantities in one of three forms: percentages, rates, or proportions (Persaud, 2009b; Weygandt et al., 2010). Commonly used ratios in the nonprofit sector, along with formulas and interpretations, are shown in Table 6.8.

RATIO ANALYSIS: TYPES OF DECISIONS FACILITATED

• Comparisons: Evaluating a program's financial performance in comparison to similar programs. Useful for long-term strategic planning and market leverage.
• Trends: Studying trends within a particular program or with similar programs over a specific time period. Useful for predicting future financial performance.
• Operating Efficiency: Focusing on how to improve a program's operational efficiency in the short term. Useful for maximizing the efficiency of assets in relation to liabilities.

  FIGURE 6.4    Types of decisions facilitated by ratio analysis.


COMMON FINANCIAL RATIOS

• Liquidity Ratios: Current Ratio; Acid Test (Quick) Ratio; Cash Ratio; Operating Cash Flow Ratio
• Efficiency Ratios: Asset Turnover Ratio; Inventory Turnover Ratio; Day's Sales in Inventory Ratio
• Profitability Ratios: Gross Margin Ratio; Return on Assets Ratio; Return on Equity Ratio
• Leverage Ratios: Debt Ratio; Debt to Equity Ratio; Interest Coverage Ratio
• Market Value Ratios: Book Value per Share Ratio; Dividend Yield Ratio; Earnings per Share Ratio; Price Earnings Ratio

LIMITATIONS OF FINANCIAL RATIOS
• Only suitable for comparison with similar competitors.
• Organization size is not considered.
• Ignores the impact of inflation and other external factors.
• Does not consider that different accounting policies may have been used by competitors.

  FIGURE 6.5    Useful ratios for nonprofit evaluation.

TABLE 6.8.  Typical Financial Ratios Used to Measure Nonprofit Financial Health

• Current Ratio. Formula: Current Assets ÷ Current Liabilities. Interpretation: Indicator of ability to pay debt obligations that are due within a year.

• Acid Test Ratio. Formula: (Current Assets – Inventory) ÷ Current Liabilities. Interpretation: Indicator of immediate short-term liquidity.

• Net Working Capital Ratio. Formula: Current Assets – Current Liabilities. Interpretation: Indicator of ability to meet short-term debt obligations.

• Change in Net Assets Ratio. Formula: (Current Year Net Assets – Prior Year Net Assets) ÷ Prior Year Net Assets × 100. Interpretation: Indicator of ability to spend no more than earnings to avoid debt incurrence.

• Operating Margin Ratio. Formula: Operating Income ÷ Revenue. Interpretation: Indicator of ability to support operations and survive as a going concern.

• Operating Reliance Ratio. Formula: Total Unrestricted Program Revenue ÷ Total Expenses. Interpretation: Indicator of ability to pay annual expenses solely from annual revenues.

• Fundraising Efficiency Ratio. Formula: Contributions Received ÷ Expenses Incurred for Fundraising. Interpretation: Indicator of ability to efficiently raise money.
(continued)


TABLE 6.8.  (continued)

Major Users of Financial Ratios
• Liquidity Ratios: Banks, Creditors, Suppliers
• Efficiency Ratios: Management, Investors, Shareholders
• Profitability Ratios: Management, Investors, Shareholders
• Leverage Ratios: Management, Investors, Creditors
• Market Value Ratios: Management, Investors, Shareholders

Note. All ratios should be compared with industry averages to get a clearer perspective on your organization's financial health.
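Several of the Table 6.8 ratios can be computed directly from the sample statement of financial position in Table 6.7. In the sketch below, the balance-sheet figures come from that sample; the operating income and revenue figures are assumptions added purely for illustration:

```python
# Sketch: selected Table 6.8 ratios from the Table 6.7 sample figures.
current_assets = 170_000
current_liabilities = 60_000
inventory = 0                 # many human-service nonprofits hold little inventory
operating_income = 40_000     # assumed figure (not from the book)
revenue = 400_000             # assumed figure (not from the book)

current_ratio = current_assets / current_liabilities       # short-term solvency
acid_test = (current_assets - inventory) / current_liabilities
net_working_capital = current_assets - current_liabilities # dollars of cushion
operating_margin = operating_income / revenue

assert net_working_capital == 110_000
print(round(current_ratio, 2))  # 2.83
```

A current ratio well above 1 suggests the organization can cover debts due within the year; comparison with industry averages, as the table note advises, gives the clearer perspective.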

CASH BUDGET

A cash budget is the most important financial budget of an organization (Kinney, Prather-Kinsey, & Raiborn, 2006). It is essentially a detailed estimate (forecast, plan) of sources of cash and uses of cash over a specific time period. It presents a numerical picture of projected income and expenditure. In large organizations, it is often prepared using information from ancillary budgets. A cash budget comprises three sections. The Cash Receipts section details all projected cash inflows. The Cash Disbursements section details all projected cash outflows. The final section, Financing, details how a deficit will be financed if there is a shortfall between revenue and expenditure, showing the amount that will need to be borrowed along with repayments of interest and principal (Garrison et al., 2017). A cash budget (see Table 6.9) is a very practical tool that can be used by anyone in daily life. In fact, creating a cash budget at the start of each year is an excellent way to manage money wisely. In the case of a personal cash budget, your total available cash will be your savings and salary, and your total disbursements will be all personal expenditures such as food, utilities, rent or mortgage, medical, miscellaneous, and so on. Deficits or shortfalls for an individual are usually financed with a personal credit card. A cash budget is an extremely useful tool for helping organizations prioritize expenditure (Weygandt et al., 2010). This may be of particular importance to smaller organizations and nonprofits that may not necessarily wish to borrow to finance expenditure. A cash budget can also help an organization with strategic planning. For example, a program's projected expenditure could be studied to determine whether there are alternative ways to deliver services at lower costs. The value of a cash

TABLE 6.9.  Presentation—Cash Budget

Cash Budget
Period Ended December 31, 202X
(one column per month, Jan–Dec; amounts omitted in this skeleton)

Beginning Cash Balance                         ①
Add Cash Receipts (Itemize)                    ②
Total Available Cash                           ③ = ① + ②
Less Cash Disbursements (Itemize)              ④
Total Cash Disbursements                       ⑤ = ④
Excess (Deficiency)                            ⑥ = ③ – ⑤
Financing
  Borrowing                                    ⑧
  Repayment Principal (Bracket Figure)         ⑨
  Repayment Interest (Bracket Figure)          ⑩
  Total Financing                              ⑪ = ⑧ + ⑨ + ⑩
Ending Cash Balance                            ⑫ = ⑥ ± ⑪

Note. Financing may not be required for every month. Repayments are generally made at the end of a month, end of 6 months, end of year, etc. Interest is calculated on the period the money is held.

Note. A cash budget for a nonprofit organization has only two sections: cash receipts and cash disbursements. It does not have a financing section. Cash budgets are generally prepared using a 1-year period, which may be divided into months or quarters depending on the level of detail needed.


budget is highly dependent on the quality of data entered. Thus, if a cash budget is poorly conceptualized, it will not be very useful and can cause the organization to get into serious financial trouble and threaten its status as a going concern.
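The three-section cash budget logic described above can be sketched in a few lines. All figures here are hypothetical; as the Table 6.9 note states, a nonprofit would omit the financing section:

```python
# Sketch of one month of a three-section cash budget (hypothetical figures).
beginning_cash = 10_000                                   # ①
cash_receipts = {"sales": 60_000, "grants": 15_000}       # ② (itemized)
cash_disbursements = {"payroll": 50_000, "rent": 20_000,
                      "supplies": 25_000}                 # ④/⑤ (itemized)

total_available = beginning_cash + sum(cash_receipts.values())  # ③ = ① + ②
total_disbursed = sum(cash_disbursements.values())
excess_or_deficiency = total_available - total_disbursed        # ⑥ = ③ – ⑤

# Financing section: a shortfall shows the amount that must be borrowed.
borrowing = max(0, -excess_or_deficiency)
ending_cash = excess_or_deficiency + borrowing                  # ⑫

print(excess_or_deficiency, borrowing, ending_cash)  # -10000 10000 0
```

In a full budget this calculation repeats for each month, with each month's ending balance carried forward as the next month's beginning balance.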

SUMMARY

This chapter introduces readers to financial accounting concepts and tools. Although some critics argue that financial accounting has nothing to do with evaluation and is not the business of the evaluator, we proffer that financial accounting, in fact, provides the raw and basic data needed for all types of cost-inclusive evaluations. As such, program administrators should understand financial accounting, as it provides the information to benchmark against peer competitors offering the same services, facilitates determination of the financial health of the organization, and provides the data needed for forecasting and strategic planning. Program evaluators, for their part, can also benefit from understanding financial statements. In addition to containing a wealth of credible data that can be extracted and used in many types of cost-inclusive evaluations, these statements can also provide useful contextual information about program operations. Sourcing certain types of data from financial statements can therefore reduce both the time and cost of data collection. This chapter begins by helping readers understand the process used by organizations to record and store accounting data. The process begins with journalizing, using a double-entry system that shows the debit and credit effects of every transaction in every account. This record is made in a general journal, which is the first book of entry. The second book of entry is the general ledger, which provides a summary grouping of all accounts classified into five categories: revenues, expenses, assets, liabilities, and equity. The trial balance is the third book of entry, providing a listing of the organization's accounts and the balance in each account. In all three books, the concept of the double-entry system is maintained: Debits must equal credits. The trial balance feeds into the preparation of the financial statements, a mandatory requirement for all organizations. These statements comprise:

1. The income statement (referred to as a statement of activities in a nonprofit entity), which answers the what happened question. It provides a summary of the entity's revenue and expenditure for the financial year and shows any resulting profit (or gain in a nonprofit) or loss.


2. The statement of cash flows also answers the what happened question. It shows where cash originated and how it was used.

3. The balance sheet (referred to as the statement of financial position in a nonprofit entity) answers the where the organization is now question. This statement provides a snapshot of the entity's financial worth at a specific point in time by showing the organization's assets (what the entity owns), its liabilities (what the entity owes), and its capital or equity (net assets in a nonprofit entity).

4. Notes to the financial statements detail the accounting policies used to prepare the financial statements, as well as the calculations for some figures shown in the financial statements.

Chapter 6 also discusses ratio analysis, which is useful for benchmarking against past program performance and against competitor programs offering similar services. Like for-profit organizations, nonprofits can use ratio analysis for making comparisons to competitors, for predicting trends, and for improving operational efficiency. This discussion also highlights that ratios can be used to determine liquidity, efficiency, profitability, leverage, and market value. Common ratios used to measure the financial health of a nonprofit are also summarized, with formulas and interpretations. The chapter concludes by discussing the cash budget, which essentially is a detailed 12-month estimate of sources of cash and uses of cash. This budget is particularly helpful for prioritizing expenditures.


DISCUSSION QUESTIONS

(1) Financial statements present a wealth of cost information for an organization.
    (a) Critically discuss as a class how program evaluators can use this information in a cost-inclusive evaluation.
    (b) This chapter stressed that program administrators should also be able to interpret financial information. Why is this important?

(2) Many organizations that commission evaluations would like to have comprehensive evaluations but are often reluctant to share cost data. This chapter has highlighted that it is mandatory for all organizations (including nonprofits) to submit documentation to the internal revenue service in their country. In the United States, this is done via IRS Form 990, which is a public document.
    (a) Visit www.irs.gov/pub/irs-pdf/f990.pdf and download and scrutinize the types of information that must be submitted on Form 990.
    (b) In light of the fact that Form 990 is a public document, do you think that evaluators can use this in negotiations to convince clients that they should not be afraid to share cost data?
    (c) Suppose the client still refuses to share cost data and the evaluator decides to follow the process to publicly view the organization's Form 990. If the evaluator then uses information from Form 990 to conduct a cost-inclusive evaluation, would this be unethical? Why or why not? Hint: Your discussion should consider universal best-practice standards that govern the conduct of an evaluation.

(3) As a class, download financial statements for three nonprofits that you can access. Study the presentation of the financial statements to see the types of information presented and how the information is presented. Using the formulas provided in the chapter to measure nonprofit financial health, use one of the downloaded financial statements and individually perform the following calculations:
    (a) Current ratio
    (b) Net working capital
    (c) Operating margin ratio
    (d) Fundraising efficiency ratio


    (e) Analyze the ratios. Comment on the financial health of the organization.
    (f) Share your individual responses with the class. Did you find it challenging to find the information needed for input into your ratio formulas? Hint: Some balance sheets show only total assets and total liabilities, so you may need to extract the current assets and current liabilities.

(4) Financial statements and Form 990 provide rich financial information that can be useful for benchmarking performance against peer organizations. Discuss how benchmarking can enhance an evaluation report.

CHAPTER 7

Cost and Management Accounting Concepts and Tools

The first six chapters have provided insight and detail about why cost-inclusive evaluation is important, the many and varied issues that need to be considered when doing this type of evaluation, the many different types of costs and benefits that can be considered in these evaluations, and the importance of having a good understanding of cost and benefit data. This knowledge is important for obvious reasons. To do cost-inclusive evaluation properly, and to make use of the wide array of cost-analytical methodologies that can help programs strategically use cost information to serve more people and do more societal good, those involved in program administration and program evaluation need a good understanding of the many issues that feed into cost-inclusive evaluations. In the past, cost-inclusive evaluations tended to use economic appraisal methods (Persaud, 2021). However, the objective of this book is to illustrate to program evaluators and program administrators that there are many other ways to analyze accounting data that can truly enlighten decision making. For example, we have discussed why nonmonetary resources and nonmonetary outcomes (effectiveness) are important in cost-inclusive evaluation. We have shown how financial accounting information can be analyzed with a variety of ratios for strategic planning purposes. By helping program administrators see how their programs measure up to their competitors, we hope to help them make program operations more efficient. The current chapter introduces readers to a set of concepts that will likely be new to many and that traditionally have been used to maximize profits. Although the word "profitability" may sound distasteful to many in health and human services, we show how cost and management accounting concepts and tools can be adapted and used to effectively enhance nonprofit decision making.





HOW COST AND MANAGEMENT ACCOUNTING CAN ENHANCE DECISION MAKING AND COST-INCLUSIVE EVALUATION

Our world economy has been devastated by the 2020 COVID-19 global health pandemic. It shut down borders, created dramatic levels of global unemployment, and eroded decades of economic development and progress. As the world attempts to survive and rebuild economies, suffering on our planet has escalated. As millions lose their livelihoods and face an uncertain future, social problems are escalating across much of the world. Stress, anxiety, and depression stemming from social isolation, deep concern about the future, and loss of loved ones have caused explosive growth in social problems. At the same time, and for similar reasons, the world economy is in such a fragile state that money needed to fund social programs is becoming harder to secure. In this extremely challenging environment, we can no longer adopt an attitude of business as usual. The reality of scarce economic resources and mounting social problems necessitates that social program decision makers and program evaluators utilize every conceivable tool that can aid decision making so that programs can serve more at lower costs. This process can be considerably aided by adopting and adapting many cost and management accounting methodologies to enhance and shed light on decision making (Persaud, 2021). As highlighted in Chapter 6, some critics feel that financial accounting and cost and management accounting are not the business of the evaluator. However, as stressed in Chapter 6, learning is an evolving process, and evaluators must therefore embrace new tools by stepping outside of their comfort zones. In this environment, program evaluators, as well as program administrators, need to be consciously thinking about how to improve program operations.
In the face of rising costs, reduced budgets, greater demand for human services, and increasing competition, a multifaceted approach is needed to keep participants' costs as low as possible and to ensure continuity, sustainability, and survival of programs in an increasingly challenging and complex environment. Program evaluators and administrators need to utilize the tools that can lead to "better control of costs . . . greater cost efficiency, and cost optimization" (Persaud, 2021, p. 3). All tools that can produce more meaningful and insightful cost data will need to be utilized to get the job done. Value-for-money considerations are taking center stage in many funding deliberations. In today's environment, the program evaluator's role can no longer be confined to merely formulating judgments about whether a program is "good or bad, effective or ineffective. There may be many other ways of doing the same thing—equally effective or more effective—for considerably lower costs" (Persaud, 2021, p. 3). This chapter shows


readers how cost and management accounting tools can inform strategic planning and decision making. Economic cost-analytical techniques and financial accounting will continue to play an important role in cost-inclusive evaluation. "However, neither offers insight into cost behavior—an important consideration for making program operations more efficient and for long-term strategic planning, forecasting, and design of program operations" (Persaud, 2021, p. 1). This is the role of cost and management accounting. Specifically, those managing and evaluating programs need to thoroughly understand fixed and variable costs and how these costs affect outcomes. Understanding the cost drivers for programs is also important, as increasing costs have implications for outcomes and the societal good that can be done (Persaud, 2021). Moreover, an organization's cost structure has major implications for program sustainability, as well as for how programs will weather the future external shocks that are almost certain to come to most countries. Understanding these issues is thus the key to effective utilization of the cost and management accounting methodologies that are so critical to good decisions. In this new-normal environment, we need to put aside personal prejudices. Instead, let us embrace and capitalize on all of the tools that can enable us to make wiser decisions so that we can do more good with our limited resources.

UNDERSTANDING COST BEHAVIOR

Cost behavior refers to how the fixed and variable costs of programs react to changes in activity level, output, or volume (see Table 7.1 and Figure 7.1). As explained in Chapter 2, within the relevant range, fixed costs remain constant in total regardless of activity level. When expressed on a per-unit basis, fixed costs become smaller with greater activity or volume. In contrast, variable costs behave in the opposite manner, becoming larger in total with greater activity or volume. When expressed as cost per unit, variable costs are constant within the relevant range (Garrison et al., 2017; Persaud, 2009c, 2020, 2021). A good understanding of cost behavior is fundamental to several types of cost and management accounting analyses, discussed later in this chapter. More importantly, understanding cost behavior helps with controlling costs, an essential consideration when planning and managing a program. It also provides insight for strategic planning purposes involving expansion, downsizing, and product or service pricing. Additionally, it is useful for budget preparation, as it enables assessment of which costs will fluctuate with activity and which costs will remain the same.

Concepts and Tools from Accounting

TABLE 7.1.  Cost Behavior with Different Activity Levels

                             50            100           1,000
                             Participants  Participants  Participants  Behavior
Total
  Variable Costs @ $120      $ 6,000       $12,000       $120,000      Varies
  Fixed Costs                $25,000       $25,000       $ 25,000      Constant
  Total Costs                $31,000       $37,000       $145,000
Per unit
  Variable Costs             $   120       $   120       $    120      Constant
  Fixed Costs                $   500       $   250       $     25      Varies
  Unit Total Costs           $   620       $   370       $    145

Note. Greater output or volume optimizes unitized fixed costs as the costs are being more efficiently utilized. For example, fixed costs such as rent, insurance costs for furniture and fixtures, utility costs for running the facility, and security guards will not increase whether 50 participants or 1,000 participants use the facilities. Thus, when output is 50 participants, per-unit fixed costs are $500, when output is 100 participants, per-unit fixed costs are $250, and when output is 1,000 participants, per-unit fixed costs are only $25. More efficient utilization of fixed costs leads to an overall reduction in unit total costs. Thus, when only 50 participants are served, unit total costs are $620. However, when 1,000 participants are served, this cost drops to only $145 per participant. This is powerful information for strategic planning purposes.

For example, total costs can be projected for different levels of planned program activities (within the relevant range) using the simple algebraic equation Y = a + bX (see Figure 7.2; note that for ease of reference, all formulas used in this chapter are summarized at the end of the chapter). Note that the variable costs of $120 in Figure 7.2 may comprise one or more variable costs, such as medications and time worked by hourly contractors, and fixed costs would comprise all the fixed costs incurred by the entity, such as space leased, utilities, and staff salaries plus benefits. Keep in mind that if there is no program activity (e.g., if a program was temporarily closed or in pandemic lockdown), few or no variable costs would be incurred. However, fixed costs would still be incurred in the short term unless program operations are terminated immediately.
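The projection equation can be sketched in a few lines of code. This is a minimal illustration using the chapter's hypothetical numbers ($25,000 in fixed costs, $120 in variable cost per participant); the function names are illustrative, not from the book.

```python
# Sketch of the cost-behavior equation Y = a + bX (Figure 7.2) and the
# per-unit behavior shown in Table 7.1. All dollar figures are the
# chapter's hypothetical values; function names are illustrative.

def total_cost(fixed: float, variable_per_unit: float, activity: int) -> float:
    """Y = a + bX: total cost for an activity level within the relevant range."""
    return fixed + variable_per_unit * activity

def unit_total_cost(fixed: float, variable_per_unit: float, activity: int) -> float:
    """Per-unit cost: the fixed cost spreads over more units as activity grows."""
    return fixed / activity + variable_per_unit

# Total-cost projections for 0, 300, and 800 participants (Figure 7.2)
for n in (0, 300, 800):
    print(n, total_cost(25_000, 120, n))       # 25000, 61000, 121000

# Per-unit total costs for 50, 100, and 1,000 participants (Table 7.1)
for n in (50, 100, 1_000):
    print(n, unit_total_cost(25_000, 120, n))  # 620.0, 370.0, 145.0
```

Note how the per-unit figure falls only because the fixed component is divided over more participants; the variable component stays at $120 throughout.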

RELEVANT RANGE

The relevant range is a significant concept in cost and management accounting. This range sets the boundaries within which existing operations can be executed without increasing operational costs (Datar & Rajan,

FIGURE 7.1  Cost behavior graphical example. Within the relevant range, variable costs in total vary, becoming higher with greater volume, but are constant on a per-unit basis; fixed costs in total are constant, but on a per-unit basis they vary, becoming smaller with greater volume.

EXAMPLE OF COST BEHAVIOR IN A SERVICE INDUSTRY

Algebraic equation for estimating total costs for an activity level:

Y = a + bX
Total Costs = Fixed Costs + (Variable Cost per Unit × Activity Level)

Fixed Costs = $25,000; Variable Costs = $120 per participant.

When activity is 0, 300, and 800 participants, respectively:

Total Costs = $25,000 + ($120 × 0)   = $25,000
Total Costs = $25,000 + ($120 × 300) = $61,000
Total Costs = $25,000 + ($120 × 800) = $121,000

An important assumption is that fixed costs are constant within the relevant range.

FIGURE 7.2  Estimating total costs for new activity levels.


2018; Persaud, 2009c, 2020, 2021). For example, if the maximum capacity for current operations is 1,000 participants and the program could serve 1,400 participants instead, the program would need to determine whether it would be beneficial to exceed the current relevant range (see Figure 7.3). If the contribution margin from service to an additional 400 participants cannot offset the increase in fixed costs, then the expansion should not be undertaken. Table 7.2 shows the contribution format income statement¹ when 1,400 participants are served and the service fee per participant is $160. Observe that this would be a good strategy, as surplus will increase by $6,000 if the expansion is undertaken—a 40% increase in surplus. Suppose instead that the increase in fixed costs was considerably higher. If this were the case, we would not proceed with the expansion unless we could guarantee that we could at least break even (see later discussion). To offset the increase in fixed costs, we could also increase service fees charged to participants. However, the goal in many social programs

Current operations are utilizing 1,000 square feet of space at a cost of $10,000. The maximum capacity for this space is 1,000 participants. There is potential to serve 1,400 participants; this will require exceeding the current relevant range and moving to a new relevant range (2,000 square feet, at a rental cost of $20,000).

Question: Can the increase in revenue from an additional 400 participants offset the increase in rental for the additional 1,000 square feet of space? Alternatively, is it possible to serve more than 1,400 participants?

FIGURE 7.3  The relevant range and its implications.

¹ A contribution format income statement is a statement that separates variable costs from fixed costs. Contribution margin is obtained by subtracting variable costs from revenue. Surplus is obtained by subtracting fixed costs from contribution margin.

Cost and Management Accounting

TABLE 7.2.  Implications of Exceeding the Relevant Range

SURPLUS EARNED WHEN 1,400 PARTICIPANTS ARE SERVED

                                    1,000 Participants   1,400 Participants
Revenue (Participant Fees) @ $160   $160,000             $224,000
– Variable Costs @ $120             $120,000             $168,000
= Contribution Margin               $ 40,000             $ 56,000
– Fixed Costs                       $ 25,000             $ 35,000 ($25,000 + $10,000)
= Surplus                           $ 15,000             $ 21,000

$21,000 – $15,000 = $6,000; $6,000 / $15,000 × 100% = 40% increase in surplus.

is to keep service fees as low as possible. Note that exceeding the relevant range to serve an additional 400 participants may also trigger other fixed-cost increases that would need to be considered. Program administrators could also examine how unit total costs would change for different activity levels ranging from 1,100 to 2,000 participants. They could also examine the overall surplus that would be generated from moving from 1,000 up to 2,000 participants. For example, if variable costs remained the same and fixed costs increased by $10,000 to accommodate the additional 1,000 participants, the new unit total cost for an activity level of 2,000 participants would be $137.50 [(fixed costs of $25,000 + $10,000 increase) / 2,000 participants + variable costs of $120]. This compares to $145 when 1,000 participants are served (see Table 7.1). Another strategy is to pass on cost savings from the optimization of fixed costs to the participants. If the program could serve 2,000 participants instead of 1,000 participants, unit participant fees could be reduced from $160 (see Table 7.2) to $145. In this scenario, fixed costs increased by $10,000; however, the program was still able to reduce unit participant fees by $15 and earn the same surplus as when 1,000 participants were served (see Table 7.3). This is powerful information for strategic planning purposes.
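The expansion arithmetic above can be checked with a short contribution-format calculation, using the chapter's figures from Tables 7.2 and 7.3 ($160 fee, $120 variable cost, $25,000 base fixed costs, $10,000 step increase). The function name and signature are illustrative, not from the book.

```python
# Sketch of the expansion analysis in Tables 7.2 and 7.3: does the extra
# contribution margin offset the step increase in fixed costs once the
# relevant range is exceeded? All dollar figures are the chapter's
# hypothetical values; the function name is illustrative.

def surplus(participants: int, fee: float, variable_cost: float = 120,
            fixed_costs: float = 25_000, extra_fixed: float = 0) -> float:
    """Contribution-format surplus: (fee - VC) x n - total fixed costs."""
    contribution_margin = (fee - variable_cost) * participants
    return contribution_margin - (fixed_costs + extra_fixed)

base = surplus(1_000, 160)                          # $15,000 (Table 7.2)
expanded = surplus(1_400, 160, extra_fixed=10_000)  # $21,000
print(expanded - base, (expanded - base) / base)    # 6000 0.4 (a 40% increase)

# Passing savings on: 2,000 participants at a reduced $145 fee (Table 7.3)
print(surplus(2_000, 145, extra_fixed=10_000))      # 15000 (same surplus)
```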

TABLE 7.3.  Optimization of New Relevant Range

SURPLUS EARNED WHEN 2,000 PARTICIPANTS ARE SERVED

                              1,000 Participants   2,000 Participants
Revenue (Participant Fees)    $160,000 a           $290,000 b
– Variable Costs @ $120       $120,000             $240,000
= Contribution Margin         $ 40,000             $ 50,000
– Fixed Costs                 $ 25,000             $ 35,000 ($25,000 + $10,000)
= Surplus                     $ 15,000             $ 15,000

a 1,000 participants × $160 = $160,000 (original fee).
b 2,000 participants × $145 = $290,000 (cost savings of $15 passed on to participants).

UNDERSTANDING PROGRAM COST OR ACTIVITY DRIVERS

Similar to the insight that is derived from having a good understanding of cost behavior and the relevant range, understanding a program’s cost drivers or activity drivers is also helpful. A cost driver or activity driver may be defined as something that triggers a change in costs (McKinney, 2004; Persaud, 2020, 2021; Sheng, 2009), causing costs to increase or decrease. Common cost drivers include student enrollment, participants served, and products produced. Organizations or programs can have multiple cost or activity drivers operating concurrently. For example, educational institutions have cost drivers such as enrolled students, number of athletes, and employee turnover.

EXAMPLES OF COST DRIVERS
• Manufacturing: machine hours, labor hours, product returns, number of inspections.
• Hospital: number of patient days, number of hospital beds available, complexity of services.
• Clinic: number of patients.
• Education: number of students.
• Pizza restaurant: pizzas sold, pizza deliveries.

Understanding program cost drivers can help a nonprofit institution serve more participants with the same resources or help a for-profit organization become more profitable (or less in debt). A proper understanding of what drives costs can guide efforts to control those costs more rationally, by reducing and/or eliminating unnecessary costs in the correct areas (Persaud, 2009c). Knowing cost drivers can also aid budget projections. Understanding what drives costs can provide insights about the optimal service mix and whether it would be advantageous to outsource some services or products. For example, another entity that specializes exclusively in the production of a particular product or the delivery of a particular service may be able to do so at considerably lower cost; if so, it could be advantageous to subcontract the product or service. A manufacturing entity producing computers, for instance, may find it cheaper to subcontract certain parts rather than to try to produce all parts internally. Likewise, a clinic may find it cheaper to subcontract psychological assessments, drug


testing, or computerized tomography scans rather than performing these services internally.

UNDERSTANDING PROGRAM COST STRUCTURE

Simply put, cost structure refers to the proportion of fixed costs in comparison with variable costs in a program (Garrison et al., 2017; Persaud, 2020, 2021). An organization’s cost structure is defined by factors such as the type of service or industry and the nature of the service or product offered (Persaud, 2009c). For example, technology, manufacturing, and many types of service industries all tend to have a high proportion of fixed to variable costs. These fixed costs typically are high-priced expenditures for salaries and benefits, leases, furniture, machinery, equipment, and debt service. Consider the high proportion of fixed costs in universities, such as faculty and staff salaries and benefits, contracted information services for website development, building maintenance, and interest payments on bonds for building renovation and construction.

COST STRUCTURE: The Proportion of Fixed Costs : Variable Costs
Fixed Costs = $100,000; Variable Costs = $20,000
Fixed Costs : Variable Costs = 83% : 17%

In contrast, enterprises with more variable costs relative to fixed costs would include many operations in which persons are self-employed and work from home (e.g., therapists, tutors, typesetters, data analysts). Such services are billed by the hour, that is, by time spent with a participant and in directly related activities, such as interactions with other professionals on behalf of the participant. Fixed costs for persons working from home are already a part of their normal household expenditure. As such, their cost structure is heavily weighted toward variable costs. Participants may also be billed for a small proportion of enterprise “overhead” such as Wi-Fi and telecommunications. If these costs vary for each participant, they are considered variable costs; otherwise, they are considered fixed costs.
Regardless of whether a program is for profit or not, program administrators still need to ensure that program operations are carried out in the most efficient manner, either to maximize surplus or to allow more participants to be served so that more societal good is done. Therefore, decision makers critically need to understand a program’s cost structure to enhance strategic planning. Recall that variable costs are incurred only if activity takes place. However, fixed costs are incurred in the immediate to medium term even if no activity takes place. To contextualize, if an individual ran into financial difficulty due to job termination, the person


could reduce spending on food, clothes, and entertainment but would not be able to cut expenditure on the house mortgage or vehicle loan without forfeiture. When thinking about cost structure, one also needs to compare committed fixed costs to which one is already obligated (e.g., a house mortgage) with more discretionary fixed costs, such as a month-to-month subscription for entertainment or part-time employees. In a program context, committed fixed costs are those that cannot be discontinued in the short term (e.g., rent, salaries plus benefits), whereas discretionary fixed costs are those that can be deferred with less immediate destructive effect (e.g., advertising, research and development) or can be completely avoided. The cost structure of an organization has major implications for its performance during economic booms and recessions. Specifically, organizations with a high proportion of fixed costs will generally benefit from higher profits during an economic boom when sales are high. However, the same organizations are more vulnerable during periods of economic downturn (Persaud, 2009c, 2020), even risking bankruptcy. A case in point was the economic recession triggered by the COVID-19 global health pandemic, which necessitated the mandatory, if temporary, closing of businesses everywhere. Continued or repeated closings during the year forced many enterprises to cease operations permanently. Although service providers such as restaurants, hairdressers, and other small businesses did not incur variable costs during the lockdown, they still had to pay essential fixed costs such as security costs for their premises, rent, insurance, and salaries for permanent staff. Lack of income, combined with a high proportion of fixed costs, led to many small and even larger enterprises suffering tremendous losses.
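The vulnerability argument can be made concrete with a small comparison. The numbers below are purely illustrative (they are not from the book): two programs that earn the same surplus at normal volume but have opposite cost structures, compared after a 50% drop in participation.

```python
# Illustrative (hypothetical figures, not from the book): two programs
# that earn the same surplus at 1,000 participants and a $160 fee, but
# with opposite cost structures, compared after a 50% drop in activity.

def operating_result(participants: int, fee: float,
                     variable_cost: float, fixed_costs: float) -> float:
    """Contribution-format result: (fee - VC) x participants - fixed costs."""
    return (fee - variable_cost) * participants - fixed_costs

fixed_heavy = dict(variable_cost=20, fixed_costs=125_000)     # high fixed costs
variable_heavy = dict(variable_cost=120, fixed_costs=25_000)  # high variable costs

for label, costs in (("fixed-heavy", fixed_heavy), ("variable-heavy", variable_heavy)):
    normal = operating_result(1_000, 160, **costs)
    downturn = operating_result(500, 160, **costs)
    print(label, normal, downturn)
# Both earn $15,000 at normal volume, but in the downturn the fixed-heavy
# program loses $55,000 while the variable-heavy program loses only $5,000.
```

The fixed-heavy program keeps paying $125,000 whether or not participants show up, which is exactly the lockdown dynamic described above.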

BREAK-EVEN ANALYSIS

Break-even analysis is one of the many accounting tools that can be used by decision makers for strategic planning (Persaud, 2020, 2021). It is also useful for measuring the crisis point of an organization (Alnasser, Shaban, & Al-Zubi, 2014). Regardless of whether an organization’s mission is to earn a profit, to recoup costs (i.e., break even), or to earn a small income so that a program is sustainable, break-even analysis can provide real insights for decision making. At the break-even point, a program is making neither a surplus nor a loss (see Figure 7.4), and revenue is exactly equal to fixed costs plus variable costs (Persaud, 2020, 2021). Specifically, the break-even point shows the minimum participants or sales needed to recover program costs. Activity below the break-even point will result in a loss, whereas


At the break-even point, expenses equal revenue: there is no profit or loss, and surplus = 0. Activity below the point produces a loss; activity above it produces a profit.

FIGURE 7.4  Break-even point graphical example.

activity above the point will result in profit (Garrison et al., 2017; Persaud, 2020), that is, a surplus or gain in a nonprofit. This type of analysis is also useful in that it can tell you the exact activity needed to achieve a specific amount of gain. As all nonprofits still need to recover their costs and perhaps show a small return, break-even analysis can also be of great value to these types of organizations. Break-even analysis also provides great insight into cost-volume-profit analysis, discussed shortly. The break-even point can be computed in either units (e.g., participants served, products sold) or money (participant fees, sales dollars). Continuing with the information used in earlier illustrations, the break-even point in units and dollars when participant fees are $160, variable costs per participant are $120, and fixed costs are $25,000 will be 625 participants, or $100,000 in revenue (participant fees), as shown in Figure 7.5. The contribution margin per unit is a powerful number. It essentially enables decision makers to calculate the profit or loss for any activity level in seconds, without having to prepare a statement that separates variable from fixed costs (i.e., a contribution format income statement). Each participant above the break-even point will earn the program $40 in surplus; each participant below the break-even point will cost the program $40. The following formula provides a shortcut to determine a program’s surplus or loss without having to prepare an entire contribution format income statement: [(Projected Activity – Break-Even Activity) × Unit CM]. Thus, if 700 participants are served, the organization would make $3,000 in surplus, that is, (700 – 625) × $40. If only 600 participants are served, the program would instead incur a loss of ($1,000), that is, (600 – 625) × $40.
Readers can verify these results by preparing an actual contribution format income statement with 700 and 600 participants, respectively.
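The break-even formulas and the unit-CM shortcut can also be sketched in code, using the chapter's figures ($160 fee, $120 variable cost, $25,000 fixed costs, and the $10,000 target surplus from Figure 7.6). Function names are illustrative, not from the book.

```python
# Sketch of the break-even computations in Figures 7.5 and 7.6 and the
# unit-contribution-margin shortcut. Dollar figures are the chapter's
# hypothetical values; function names are illustrative.

def break_even_units(fixed: float, fee: float, variable_cost: float,
                     target_surplus: float = 0) -> float:
    """Units needed so revenue covers all costs plus any target surplus."""
    unit_cm = fee - variable_cost
    return (fixed + target_surplus) / unit_cm

def break_even_dollars(fixed: float, fee: float, variable_cost: float,
                       target_surplus: float = 0) -> float:
    """Revenue needed: fixed expenses (+ target) divided by the CM ratio."""
    cm_ratio = (fee - variable_cost) / fee
    return (fixed + target_surplus) / cm_ratio

def quick_surplus(activity: int, break_even: float, unit_cm: float) -> float:
    """Shortcut: (Projected Activity - Break-Even Activity) x Unit CM."""
    return (activity - break_even) * unit_cm

print(break_even_units(25_000, 160, 120))                       # 625.0
print(break_even_dollars(25_000, 160, 120))                     # 100000.0
print(break_even_units(25_000, 160, 120, target_surplus=10_000))  # 875.0
print(quick_surplus(700, 625, 40))                              # 3000
print(quick_surplus(600, 625, 40))                              # -1000
```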


If the entity is a nonprofit and simply wants to break even, then the entity would need to serve exactly 625 participants. If, instead, a specific amount of surplus was desired to ensure viability and sustainability, which are so important today (McKinney, 2004), then this could also be easily determined (see Figure 7.6). For example, if the entity wanted to earn $10,000 in surplus, the organization would need to serve 875 participants. Break-even analysis, then, can be quite insightful and informative, helping decision makers and program evaluators to strategize to reduce the risk of program failure. Today, decision makers are under constant pressure to ensure that their programs are, at a minimum, recovering their total costs. In this new environment, money is in short supply. At the same time, the reliance on social services is on the increase. If programs cannot at least sustain themselves, they will likely be discontinued. To prevent program

                             $      %
Revenue                      $160   100
– Variable Costs             $120    75
= Contribution Margin [CM]   $ 40    25

Break-Even Units = Fixed Expenses / Unit CM = $25,000 / $40 = 625 participants
Break-Even Dollars = Fixed Expenses / CM Ratio = $25,000 / .25 = $100,000

PROOF: Contribution Format Income Statement

                             Total (625 participants)   Per Participant   %
Revenue (Participant Fees)   $100,000                   $160              100
– Variable Costs             $ 75,000                   $120               75
= Contribution Margin [CM]   $ 25,000                   $ 40               25
– Fixed Costs                $ 25,000
= Surplus                    $      0

At the break-even point, costs are exactly equal to revenue earned.

FIGURE 7.5  Break-even analysis computations.


Break-Even Units = (Fixed Expenses + Target Surplus) / Unit CM = ($25,000 + $10,000) / $40 = 875 participants
Break-Even Dollars = (Fixed Expenses + Target Surplus) / CM Ratio = ($25,000 + $10,000) / .25 = $140,000

PROOF: Contribution Format Income Statement

                             Total (875 participants)   Per Participant   %
Revenue (Participant Fees)   $140,000                   $160              100
– Variable Costs             $105,000                   $120               75
= Contribution Margin [CM]   $ 35,000                   $ 40               25
– Fixed Costs                $ 25,000
= Target Surplus             $ 10,000

FIGURE 7.6  Break-even participants to achieve target surplus.

termination, decision makers have a responsibility to utilize all tools that can give them insight on how to make their operations more efficient so that they can survive and continue to contribute to society.

COST-VOLUME-PROFIT ANALYSIS

Cost-volume-profit analysis is another powerful cost and management tool that, like break-even analysis, has traditionally been used in private enterprise. Although inclusion of “profit” in the name of this tool may be distasteful to some, it can be quite useful to nonprofit programs trying to understand how changes in one or more operational or financial factors will likely affect their financial well-being. Seemingly small changes in participant fees, fixed program costs, variable program costs, and participant engagement levels can interact to determine the very survival of a program. This is where cost-volume-profit analysis can really help program managers. Cost-volume-profit analysis essentially studies the relationship between revenue and expenditure in the short term and how this affects profit (Abdullahi, Sulaimon, Mukhtar, & Musa, 2017; Albrecht et al., 2011;


Bragg, 2019; Garrison et al., 2017; Horngren, Datar, George, Rajan, & Ittner, 2008) or surplus in a nonprofit. Cost-volume-profit analysis is intricately linked to break-even analysis and the contribution format income statement. It uses sensitivity analyses to explore the effects of key management assumptions pertaining to costs, volume, product or service mix, and selling price (Garrison et al., 2017; Persaud, 2020, 2021) or the participant fees that should be charged. Building on the previous examples, Figure 7.7 shows how cost-volume-profit analysis can enhance social program decision making. Assume that the program has received complaints about participant fees and would like to reduce fees by 20%. Can the program afford to do so? Management would like to know the break-even point if this fee reduction is implemented with no changes in variable and fixed costs. Assume also that organizational policy stipulates that all programs must make a small return (at least $1,000) to continue in operation. This policy was implemented to help programs become self-sustainable over the medium to long term.

                             $      %
Revenue ($160 – 20%)         $128   100
– Variable Costs             $120    94
= Contribution Margin [CM]   $  8     6

Break-Even Units = (Fixed Expenses + Target Surplus) / Unit CM = ($25,000 + $1,000) / $8 = 3,250 participants

PROOF: Contribution Format Income Statement

                             Total (3,250 participants)   Per Participant   %
Revenue (Participant Fees)   $416,000                     $128              100
– Variable Costs             $390,000                     $120               94
= Contribution Margin [CM]   $ 26,000                     $  8                6
– Fixed Costs                $ 25,000
= Target Surplus             $  1,000

FIGURE 7.7  Break-even analysis with fee reduction and target surplus.


As Figure 7.7 shows, 3,250 participants will need to be served to get the desired target surplus of $1,000. The calculations shown in Figure 7.7, however, make the assumption that the program’s fixed costs remain constant and could accommodate 3,250 participants. However, based on the hypothetical data shown in Figure 7.3, service to this quantity of participants would actually necessitate that the program rent additional space, as the current program operations can only accommodate 1,000 participants in the existing 1,000 square feet of rented space. To accommodate 3,250 participants, the program will need to rent at least 2,250 square feet of additional space. However, because space can only be rented in increments of 1,000 square feet, the program will need to rent an additional 3,000 square feet, resulting in an increase of $30,000 in fixed costs. This will increase the break-even volume beyond 3,250 participants. This increased volume would also likely require an increase in administrative staff and possibly other fixed expenses as well, as it represents more than a 200% increase in participants. Note, also, that without concrete evidence of massive unmet need for program services, there is no guarantee that such a large clientele would use the program. Suppose instead that management decides to examine a proposal that would reduce participant fees by 10% rather than 20%. Also, assume that variable costs could be reduced by 15% through cost savings such as more homework on the part of the participant and less in-­clinic time with a counselor. Working these new figures through the same basic formulas as before, we find that the new break-even point is only 619 participants (see Figure 7.8)—far more feasible and realistic! This newer, more modest reduction in participant fees, combined with a reduction in variable costs, is definitely something to consider. 
As it stands, the current relevant range of 1,000 square feet of space can accommodate up to 1,000 participants before additional rental costs will be incurred for more square footage of space. If the maximum number of participants that the existing space can accommodate could be served, the organization would generate net positive surplus of $17,000 (see Table 7.4), well above the minimum positive balance (i.e., $1,000) required by the organization in which the program operates. As should be evident by now, cost-­volume-­profit analysis can be very helpful for examining the financial consequences of making basic changes in program operations. Likely effects of changes in multiple variables, such as fixed and variable costs and participant load, can be examined in the analysis. By varying assumptions and examining changes in expenditures and revenues, program administrators can readily and safely test effects of different options for reducing expenditures, increasing revenues, or possibly both. For instance, if the 15% reduction in variable costs proposed

                               $      %
Revenue ($160 – 10%)           $144   100
– Variable Costs ($120 – 15%)  $102    71
= Contribution Margin [CM]     $ 42    29

Break-Even Units = (Fixed Expenses + Target Surplus) / Unit CM = ($25,000 + $1,000) / $42 = 619 participants

PROOF: Contribution Format Income Statement

                             Total (619 participants)   Per Participant   %
Revenue (Participant Fees)   $ 89,136                   $144              100
– Variable Costs             $ 63,138                   $102               71
= Contribution Margin [CM]   $ 25,998                   $ 42               29
– Fixed Costs                $ 25,000
= Target Surplus             $    998 a

a Figure differs from the $1,000 target due to rounding; a partial participant cannot be served.

FIGURE 7.8  Revised break-even analysis with fee reduction, variable cost reduction, and target surplus.

TABLE 7.4.  Surplus When Current Fixed Costs Are Fully Optimized

Contribution Format Income Statement

                             Total (1,000 participants)   Per Participant   %
Revenue (Participant Fees)   $144,000                     $144              100
– Variable Costs             $102,000                     $102               71
= Contribution Margin [CM]   $ 42,000                     $ 42               29
– Fixed Costs                $ 25,000
= Surplus                    $ 17,000




earlier (see Figure 7.8) was simply an assumption to see what the end result would be if implemented, decision makers would then need to figure out how to make this cost saving a reality. Lowering fixed costs will lower the break-even point. However, increasing fixed costs could also be beneficial, as doing so may permit service to a larger clientele, which may be sufficient to offset the increased fixed costs. Finally, cost-volume-profit analysis is also useful for forecasting and can provide competitive leverage when applying for funding, by showing how many participants can be served with the funding received (Persaud, 2020). Because the analysis can be set up in spreadsheet apps such as Excel, it takes mere seconds to vary assumptions and observe the effect on the bottom line, all without suffering the negative consequences of making the wrong financial choices.
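The same what-if exercise done in a spreadsheet can be sketched in a few lines, replaying the two scenarios from Figures 7.7 and 7.8 with the chapter's figures ($160 fee, $120 variable cost, $25,000 fixed costs, $1,000 target surplus). The function name is illustrative, not from the book.

```python
# Sensitivity sketch of the cost-volume-profit scenarios in Figures 7.7
# and 7.8: vary fee and variable-cost assumptions and recompute the
# volume needed for a $1,000 target surplus. Dollar figures are the
# chapter's hypothetical values; the function name is illustrative.

def units_for_target(fee: float, variable_cost: float,
                     fixed: float = 25_000, target: float = 1_000) -> float:
    """Participants needed to cover fixed costs plus the target surplus."""
    return (fixed + target) / (fee - variable_cost)

# Scenario 1 (Figure 7.7): cut the $160 fee by 20% (to $128), costs unchanged
print(units_for_target(fee=128, variable_cost=120))   # 3250.0 participants

# Scenario 2 (Figure 7.8): cut the fee by 10% (to $144) and variable
# costs by 15% (to $102)
print(units_for_target(fee=144, variable_cost=102))   # ~619.05
# A partial participant cannot be served, so this rounds to 619
# participants, yielding $998 of surplus rather than the full $1,000.
```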

RELEVANT COST ANALYSIS

Types of decisions facilitated by differential analysis:
• Keeping or dropping a product or service.
• Making a product or doing a procedure internally versus purchasing or outsourcing.
• Fulfilling a special order.
• Utilization of a constrained resource.
• Selling as a by-product or as a refined product.
These decisions require separation of relevant (avoidable) from irrelevant (unavoidable) costs and benefits. Note that costs relevant in one decision may be irrelevant in another decision.

Program administrators in charge of nonprofit organizations are confronted with many of the same concerns as managers in manufacturing or production. These include optimizing use of available resources by figuring out which services (or products) to introduce, continue, expand, reduce, or close out. Determining the best combination of services to offer is especially important to maximize program outcomes within budget and other resource constraints. By now you probably recognize that any decision involving choices requires scrutiny of the costs versus the outcomes of each alternative being considered. Decisions of this nature are aided by differential analysis. Differential analysis isolates relevant (or avoidable) from irrelevant (or unavoidable) costs and benefits. After separating these two types of data, decision makers can focus on just the information that can influence their choices (Albrecht et al.,


2011; Garrison et al., 2017; Persaud, 2020). By ignoring data that have no influence on the choices under consideration, decision makers reduce the likelihood that their thinking will be clouded by sunk costs, by future costs and benefits that are the same for all alternatives, and more. The remainder of this section examines several variants of differential analysis.

Keeping or Dropping a Service

Decisions pertaining to whether to keep or discontinue a service or product can be quite complicated and need to be carefully analyzed. Consider the following hypothetical scenario. The Community Enhancement Foundation was established in April 2020 in response to the COVID-19 global health pandemic. The mandate of this foundation is to provide essential services to residents residing in ABA City. All services are administered from a small building rented by the foundation. Three services are currently being offered by the foundation:

• Program 1: Grocery shopping. This service takes orders for groceries and delivers groceries once a week to households registered for the service.
• Program 2: Meals on Wheels. This service delivers a hot dinner daily to seniors who live alone. Entrees and sides for meals are subcontracted from an outside supplier; meals are assembled at the foundation.
• Program 3: Home tutoring. This service provides 4 hours of online tutoring to children ages 4–15 years, Mondays through Fridays. Tutoring for children ages 4–10 years takes place between 8:00 A.M. and 12:00 noon. Tutoring for children ages 11–15 years takes place between 1:00 and 5:00 P.M.

The board of directors of the foundation has expressed concern that the home tutoring program has recorded a loss of $7,100 for the first 6 months of operation. The president of the board has asked for additional information to determine whether this program should be continued for the next 6 months. Data pertaining to the foundation’s operations for the first 6 months are shown in Table 7.5. Contextual information pertaining to fixed costs is as follows:

• Salaries, general overhead, and delivery services are unique to each individual program and will be saved if a program is discontinued.
• Rent is currently allocated to the three programs based on square footage of space occupied by each program. The current space has been


TABLE 7.5.  Income Statement, April to September 2020

                          Grocery      Meals on     Home
                          Shopping $   Wheels $     Tutoring $   Total $
Revenue                   50,000       58,240 a     144,000      252,240
– Variable Expenses       10,000       36,400 b       5,000       51,400
= Contribution Margin     40,000       21,840       139,000      200,840
– Fixed Expenses
    Salaries              15,000       12,000       126,000      153,000
    General Overhead       1,000        1,000         5,100        7,100
    Rent                   4,000        4,000        15,000       23,000
    Delivery Services      4,000        1,200             0        5,200
  Total Fixed Expenses    24,000       18,200       146,100      188,300
= Surplus (Loss)          16,000        3,640        (7,100)      12,540

a 26 weeks × 7 days × 20 seniors = 3,640 meals @ $16.00 per meal = $58,240.
b 3,640 meals @ ($9.00 meal price + $1.00 for containers, etc.) = $36,400.

leased for a year. The landlord is renting the building as one unit, so discontinuing a particular program will not affect rental costs; the rent would simply be reallocated to the other two programs. Given this contextual information, a revised analysis for the home tutoring program is shown in Figure 7.9. As the analysis indicates, the home tutoring program is making a positive contribution to the overall income of the foundation. It should therefore be continued: the foundation would be $7,900 worse off if the program were discontinued, because the $139,000 contribution margin given up exceeds the $131,100 in fixed costs saved.
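The keep-or-drop logic above can be sketched in a few lines of Python. This is an illustrative calculation only; the function name is ours, and the figures come from the home tutoring scenario.

```python
def net_effect_of_dropping(contribution_margin, avoidable_fixed_costs):
    """Net income effect of dropping a service.

    Positive -> dropping improves income; negative -> keep the service.
    Unavoidable costs (e.g., rent on a building leased as one unit) are
    excluded: they are incurred either way, so they are irrelevant.
    """
    return sum(avoidable_fixed_costs) - contribution_margin

# Home tutoring: $139,000 contribution margin lost if dropped;
# salaries ($126,000) and general overhead ($5,100) would be saved.
effect = net_effect_of_dropping(139_000, [126_000, 5_100])
print(effect)  # -7900: dropping the program would reduce income by $7,900
```

Note that rent never enters the calculation, even though $15,000 of it is allocated to the program: allocated but unavoidable costs are irrelevant to the decision.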

Make-or-Buy Decision

Make-or-buy analysis is concerned with whether it is cheaper to do something internally or to purchase or subcontract it. It could involve a decision about an entire product or service or about a specific component of a product or service. Continuing with the foundation example, deliveries of subcontracted food items for the Meals on Wheels program have been late on several occasions, impeding on-time food delivery to senior citizens. During a recent review, many seniors did not receive their dinners until 8:00 P.M., which is unsatisfactory, as most of these seniors have to take medications every

Contribution Margin Lost if Home Tutoring Program Is Discontinued     (139,000)
Fixed Costs Saved
  Salaries                                                             126,000
  General Overhead                                                       5,100
  Total Fixed Costs Saved                                              131,100
Net Disadvantage of Dropping Home Tutoring Program                      (7,900)

The home tutoring program should be continued. It is generating a positive program margin and providing a valuable service to residents of ABA City.

FIGURE 7.9.  Analysis of the home tutoring program.

6 hours with meals. The board of directors would like to know whether meals could be entirely prepared, not just assembled, in-house. If the meals are prepared entirely in-house, additional monthly fixed costs will be incurred for two cooks ($900 each), and rental costs of $2,000 will be incurred for the second half of the year for two electric stoves, two microwaves, and a refrigerator. In-house meal preparation will increase electricity costs by $500 per month. In addition, variable costs of $5.00 per meal will be incurred for groceries. The meals are currently purchased from an outside supplier for $9.00 per meal. All current fixed costs shown in Table 7.5 will still be incurred and are therefore irrelevant to the decision. An analysis of the relevant costs for preparing the meals in-house is presented in Table 7.6. It shows that it is cheaper to purchase the meals than to prepare them internally.

Suppose residents of ABA City have expressed an interest in the foundation continuing the Meals on Wheels program after COVID-19. If the foundation is contemplating operating the program for a few years (say, 5 years, or 60 months), it may wish to purchase its own kitchen equipment and write off the cost of the assets equally over this time period. For example, if the equipment cost $3,000, $50 would be written off monthly. Circling back to Table 7.6, the relevant costs for in-house meal preparation for the 6-month period would become:

Current In-House Cost $34,000 – Equipment Rental $2,000 + Depreciation of Equipment ($50 × 6 months) = $32,300


TABLE 7.6.  Relevant Costs in Make-or-Buy Decision

                                                     Make Meals $   Buy Meals $
Variable Costs (3,640a meals @ $5 per meal)              18,200
Fixed Costs
  Cooks (2 @ $900 per month × 6 months)                  10,800
  Rental of Kitchen Equipment (6 months)                  2,000
  Electricity ($500 per month × 6 months)                 3,000
Cost for Subcontracting Meals (3,640 meals @ $9)                        32,760
Total Relevant Costs                                     34,000         32,760

Based on the analysis, it is cheaper to purchase the meals.
a 26 weeks × 7 days × 20 seniors = 3,640 meals.

Using this strategy, it is worthwhile for the foundation to invest in its own kitchen equipment and produce the meals internally, as it will cost $32,300 to produce internally versus $32,760 to purchase the meals.
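Both make-or-buy comparisons can be expressed as a small relevant-cost calculation. The sketch below is illustrative (the function name and dictionary keys are ours); the figures come from Table 7.6 and the equipment-ownership discussion.

```python
def make_or_buy(make_costs, buy_costs):
    """Compare relevant costs of making vs. buying; returns the cheaper
    option along with both totals. Costs common to both options are
    simply omitted, since they are irrelevant to the decision."""
    make_total = sum(make_costs.values())
    buy_total = sum(buy_costs.values())
    return ("make" if make_total < buy_total else "buy", make_total, buy_total)

meals = 26 * 7 * 20  # 3,640 meals over the 6-month period

# Renting kitchen equipment (Table 7.6): buying the meals is cheaper.
rent_option = make_or_buy(
    {"groceries": meals * 5, "cooks": 2 * 900 * 6,
     "equipment rental": 2_000, "electricity": 500 * 6},
    {"subcontracted meals": meals * 9},
)

# Owning equipment ($3,000 written off over 60 months): making is cheaper.
own_option = make_or_buy(
    {"groceries": meals * 5, "cooks": 2 * 900 * 6,
     "depreciation": 3_000 / 60 * 6, "electricity": 500 * 6},
    {"subcontracted meals": meals * 9},
)
print(rent_option)  # ('buy', 34000, 32760)
print(own_option)   # ('make', 32300.0, 32760)
```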

Special Order

A special order is a one-off order for a product or service that can be undertaken when an organization has excess capacity. The price charged for a special order is usually different from the regular price. Most special orders require consideration only of the price charged in relation to variable costs. Fixed costs are not usually affected, except in exceptional circumstances. For instance, if a company producing emblem products had to purchase a special tool to stamp the emblem for the special order, this fixed cost would be relevant and would be part of the cost consideration.

Suppose the lone hospital in ABA City has its own state-of-the-art medical laboratory. Currently, the hospital offers services only to its patients. The only other laboratory in ABA City recently contacted the hospital to see whether it could accommodate a special order for 2,000 COVID-19 tests, as one of its technicians is overseas and unable to return home. The accounting staff at the hospital have been asked to prepare analyses to determine whether the hospital should undertake this job. After speaking with hospital laboratory personnel, they have prepared the analysis shown in Table 7.7. The hospital will charge the lab the same price that it currently charges its patients ($165). The variable costs that will be

TABLE 7.7.  Special Order for 2,000 COVID-19 Tests

                                        Per Unit $       Total $
Incremental Revenue (2,000 tests)           165          330,000
Incremental Costs
  Test Kits for COVID-19 Tests               80         (160,000)
  Direct Labor for COVID-19 Tests            85         (170,000)
Incremental Income                                             0

Note. The hospital will break even if this special order is undertaken. However, since this is a major health pandemic and capacity is available to fulfill this order, they decide to fulfill the special order. If a small profit is desired, the hospital can price this special order higher, since these are outpatient tests. For instance, if a price of $170 is charged, the incremental income would be $10,000 instead.

incurred are $80 for the test kit and $85 for direct labor. As shown in Table 7.7, the hospital will not make any profit on this special order but will fulfill the special order in light of the seriousness of the pandemic.
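The incremental analysis of Table 7.7 can be sketched as a one-line calculation. The function below is illustrative (our name and parameters); it simply nets the incremental revenue against the incremental costs, ignoring fixed costs that the order does not change.

```python
def special_order_income(units, price, variable_costs_per_unit,
                         incremental_fixed=0):
    """Incremental income from a one-off order: existing fixed costs that
    are unaffected by the order are ignored as irrelevant."""
    return units * (price - sum(variable_costs_per_unit)) - incremental_fixed

# 2,000 tests at the regular $165 price; $80 kit + $85 labor per test
print(special_order_income(2_000, 165, [80, 85]))  # 0 -> break even
# Pricing at $170 instead would yield $10,000
print(special_order_income(2_000, 170, [80, 85]))  # 10000
```

The `incremental_fixed` parameter covers the exceptional case described above, such as a special stamping tool purchased only for the order.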

Utilization of a Constraint

This decision involves examining the product or service mix that would optimize profit when a particular resource (e.g., machine hours, labor hours, materials) is limited (Garrison et al., 2017). Consider the following scenario. The ABA City Hospital laboratory has a time constraint in Week 40 and wants to determine whether it should focus on cholesterol tests or diabetes tests for that week. Participants pay $10 for a cholesterol test and $15 for a diabetes test. The variable costs for a cholesterol test and a diabetes test are $5 and $9, respectively. The constraint is 2,400 minutes of technician time. The cholesterol test takes 2 minutes, and the diabetes test takes 5 minutes. Average demand per week is 800 cholesterol tests and 1,000 diabetes tests. The laboratory would like to determine the optimal service mix to generate the highest income for Week 40 (see Table 7.8). The process to determine the optimal product or service mix involves three steps:

1. Find the contribution margin for each product or service.
2. Find the contribution margin per unit of the constraint for each product or service and rank from highest to lowest.
3. Calculate the optimal product or service mix to optimize profit.


TABLE 7.8.  Utilization of a Constrained Resource

(1) Contribution margin per test
                                      Cholesterol Test $   Diabetes Test $
Revenue                                      10.00               15.00
– Variable Costs                              5.00                9.00
= Contribution Margin Per Test                5.00                6.00

(2) Contribution margin per minute of the constraint (rank in parentheses)
Contribution Margin Per Test                  5.00                6.00
÷ Time Required Per Test (minutes)               2                   5
= Contribution Margin Per Minute              2.50 (1)            1.20 (2)

(3) Optimal service mix
Weekly Demand for Cholesterol Tests              800 tests
× Time Required Per Test                           2 minutes
= Total Time Required for Cholesterol Tests    1,600 minutes
Total Time Available                           2,400 minutes
– Time Used for Cholesterol Tests              1,600 minutes
= Time Remaining for Diabetes Tests              800 minutes
÷ Time Required Per Diabetes Test                  5 minutes
= Number of Diabetes Tests                       160 tests

The optimal mix for Week 40 is 800 cholesterol tests and 160 diabetes tests.

According to Table 7.8, the optimal product mix for Week 40 for ABA City Hospital is to undertake 800 cholesterol tests and 160 diabetes tests. This will provide a contribution margin of $4,960 (800 cholesterol tests × $5.00 + 160 diabetes tests × $6.00).
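The three-step procedure amounts to a greedy allocation: fill demand in descending order of contribution margin per minute of the constrained resource. The sketch below is illustrative (our function and data structure), using the Week 40 figures.

```python
def optimal_mix(services, minutes_available):
    """Greedy allocation of a constrained resource: serve demand in
    descending order of contribution margin per minute."""
    ranked = sorted(services,
                    key=lambda s: (s["price"] - s["vc"]) / s["minutes"],
                    reverse=True)
    plan, remaining = {}, minutes_available
    for s in ranked:
        units = min(s["demand"], remaining // s["minutes"])
        plan[s["name"]] = units
        remaining -= units * s["minutes"]
    return plan

services = [
    {"name": "cholesterol", "price": 10, "vc": 5, "minutes": 2, "demand": 800},
    {"name": "diabetes", "price": 15, "vc": 9, "minutes": 5, "demand": 1000},
]
print(optimal_mix(services, 2_400))  # {'cholesterol': 800, 'diabetes': 160}
```

The greedy rule is optimal for a single constraint; with multiple simultaneous constraints, a linear-programming formulation would be needed instead.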

Sell or Process Further

Sell or process further involves determining whether a product or service should be sold as a by-product or as a fully refined product or service. For example, if the Community Enhancement Foundation decides to operate the Meals on Wheels program for 5 years and prepare the meals in-house, it may wish to analyze whether it would make more sense to sell uncooked meals or fully cooked meals to senior citizens. If uncooked meals are sold, cooks and stoves would not be required. Moreover, as is done commercially in some countries (e.g., Blue Apron), an entire week of meals could be delivered at once, thus saving on delivery charges.


SUMMARY

This chapter stresses that, in an environment of rising costs, budget cuts, and increasing demand for human services, operational efficiency is central to maintaining competitiveness, meeting participants' needs, securing renewed program funding, and ensuring continuity, sustainability, and survival of programs in an increasingly challenging and complex environment. Program evaluators and administrators need to capitalize on the tools that can provide insight into how to achieve greater cost efficiency and cost optimization. Today, it can no longer be business as usual. Program evaluators must rise to the challenge and go the extra mile by showing program administrators how they can do more with less. Program administrators, for their part, also need to be familiar with cost and management accounting tools so that they, too, can use these tools internally or, at a minimum, can understand what the program evaluator is recommending.

The chapter examined several cost and management accounting concepts and tools using illustrative examples. Specifically, it focused on:

1.  Cost behavior, which is concerned with how fixed and variable costs behave within the relevant range. The chapter highlighted that optimization of fixed costs leads to lower overall per-unit costs, which is powerful information in decision making.

2.  The relevant range, an important concept that affects cost behavior. This range sets the boundaries within which existing program operations can be executed without an increase in operational costs.

3.  Activity drivers (e.g., number of students, patients in a hospital), which trigger changes in costs. Understanding this concept can help with reducing or eliminating unnecessary costs and is also helpful for determining whether particular services should be outsourced or done internally.

4.  The organization's cost structure, that is, its proportion of fixed costs in relation to variable costs, which is important for strategic planning. Organizations with higher fixed costs will experience considerably more vulnerability during economic downturns. This can lead to program closings.

5.  Break-even analysis, which shows the point at which a program is neither making a gain nor a loss; that is, operational costs are exactly equal to service fees received. Programs need to operate above the break-even point to ensure sustainability and continuity.

6.  Cost-volume-profit analysis, which is important for helping nonprofits understand how changes in one or more parameters, such as operational costs or service fees, will affect a program's financial well-being.


7.  Relevant cost analysis is important to determine the best combination of services to offer and how to optimize program outcomes within budget and other resource constraints. Relevant cost analyses are particularly useful for determining whether to keep or drop a service, whether to offer or outsource a service, whether to engage in a one-time special order for a particular service, how to utilize a constrained resource, and whether to engage in further processing of a particular service.


DISCUSSION QUESTIONS

(1) Program A has annual fixed costs of $150,000 and variable costs per participant of $250. The program is currently serving 200 participants but has the capacity to serve 300 participants without incurring additional fixed costs.
    (a) Recreate the template provided under Discussion Question 1(c) in Chapter 2. Using the information above, perform calculations with 200, 250, 275, and 300 participants.
    (b) What are you observing with respect to average participant costs as your fixed costs are optimized?
    (c) Discuss as a class how this information can help decision makers.

(2) Read the section on program cost structure. Choose a program that you are familiar with and discuss in class the pros and cons of the following cost structures. Assume that there is a global economic recession that is projected to last 5 years.
    (a) Fixed Costs : Variable Costs = 70:30
    (b) Fixed Costs : Variable Costs = 50:50
    (c) Fixed Costs : Variable Costs = 30:70

(3) As a class, identify 10 different programs in different sectors.
    (a) For each program, identify two cost drivers and discuss how these cost drivers affect program costs.
    (b) Next, pick two of the identified programs from two different sectors. Discuss in class how decision makers can use the cost drivers identified for your two programs for strategic decision making.

(4) Program B must earn a surplus of at least $20,000 to be sustainable. Annual fixed costs are $80,000, and variable costs per participant are $500. Participants pay a fee of $800 to use the program services.
    (a) Using Figure 7.6 as a guide, calculate the break-even point in participants for Program B. Round your figure to the nearest whole number, as it is not possible to serve a partial participant.
    (b) Calculate the break-even point in dollars for Program B.
    (c) Proof your calculations. Note that your proof will not come to exactly $0, as your figure in (a) is rounded.
    (d) Calculate the annual surplus generated by the program when 500 participants are served.
    (e) Variable costs per participant have increased to $650. However, program administrators do not wish to raise participant fees in this environment. Perform new calculations and advise management whether this is a viable option, given that the program must make at least $20,000 in annual surplus to comply with its mandate. All other information remains the same.
    (f) Based on the analysis in (e), discuss in class the options that program administrators have if they do not increase program fees.
    (g) Since you are no longer intimidated by cost-inclusive evaluation, perform various cost-volume-profit analyses using the options identified in (f) and observe the effect on surplus. Share your insights with your classmates.


COST AND MANAGEMENT ACCOUNTING FORMULAS

Estimating Total Costs for New Activity Levels (see Figure 7.2)
  Y = a + bX
  where Y = total cost, a = fixed costs, b = variable cost per participant, and X = activity level (i.e., number of participants)

Contribution Margin (see Table 7.2)
  = Revenue – Variable Costs

Break-Even Units (see Figure 7.5)
  = Fixed Expenses ÷ Unit Contribution Margin

Break-Even Dollars (see Figure 7.5)
  = Fixed Expenses ÷ CM Ratio

Break-Even Units with Target Surplus (see Figure 7.6)
  = (Fixed Expenses + Target Surplus) ÷ Unit Contribution Margin

Break-Even Dollars with Target Surplus (see Figure 7.6)
  = (Fixed Expenses + Target Surplus) ÷ CM Ratio
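These formulas translate directly into code. The sketch below is illustrative; it uses the Program B figures from Discussion Question 4 (fee $800, variable cost $500 per participant, fixed costs $80,000, target surplus $20,000) and rounds break-even participants up, since a partial participant cannot be served.

```python
import math

def break_even_units(fixed, unit_cm, target_surplus=0):
    """Break-even (or target-surplus) point in participants,
    rounded up to a whole participant."""
    return math.ceil((fixed + target_surplus) / unit_cm)

def break_even_dollars(fixed, cm_ratio, target_surplus=0):
    """Break-even (or target-surplus) point in revenue dollars."""
    return (fixed + target_surplus) / cm_ratio

unit_cm = 800 - 500        # $300 contribution margin per participant
cm_ratio = unit_cm / 800   # 0.375

print(break_even_units(80_000, unit_cm))          # 267 participants
print(break_even_units(80_000, unit_cm, 20_000))  # 334 with $20,000 surplus
print(break_even_dollars(80_000, cm_ratio))       # ≈ $213,333 in fees
```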

PART IV

Cost-Inclusive Evaluation for the Scientist–Manager–Practitioner

CHAPTER 8

Breaking Down Cost by Activity for Better Cost-Inclusive Evaluations

In the preceding chapters, we applied accounting, economics, and evaluation to construct models of programs that aid in understanding and optimizing relationships between program costs, program activities, and monetary as well as nonmonetary program outcomes (cf. Yates, 1980a, 2020). This and the next chapter apply evaluation and economic concepts and methods to deepen understanding of program operations for a more comprehensive evaluation of programs. This understanding can be characterized as an extension of current models combining research and practice in program operations—what DeMuth and colleagues termed the scientist–manager–practitioner (DeMuth, Yates, & Coates, 1984).

CAPTURE THE ESSENCE OF A PROGRAM: ITS ACTIVITIES

In most cost-inclusive evaluations, activities occupy a central role. Whereas some nomenclatures of program evaluation distinguish primarily between the program as a whole and its outcomes, or between an evaluation of program "processes" and an evaluation of program outcomes (e.g., Posavac, 2011), our view of programs discerns important qualitative and quantifiable differences between:

1. the resources used by a program to provide
2. specific activities in which participants are invited to participate to change, so that



3. certain processes operating within the participant (often called the client or patient in human services) are modified, such that
4. outcomes desired by the participant, provider, and perhaps family become more likely (Figure 8.1).

Building the resources → activities → processes → outcomes analysis (RAPOA) logic model for the program or programs being studied can be a useful, formative step in a cost-inclusive evaluation. Insights into the theory of change used by program providers and entertained by program participants are commonly generated during delineation of specific relationships between activities, processes, and outcomes. Discussions also may suggest means of maintaining process → outcome relationships while making them less costly, by finding activities that modify key processes in similar ways but with less use of expensive resources such as professionals' time.

RAPOA models can be developed for simpler programs with a flow diagram showing relationships between resources, activities, processes, and outcomes, as shown in Figure 8.2 for a needle exchange program aiming to reduce new HIV infections. For more complex programs, discovering what the activities of the program are supposed to be, according to program managers and guidelines, versus what activities actually occur in the program can be crucial

RESOURCES
a. staff time and expertise
b. space
c. assessment instruments

ACTIVITIES
1. intake
2. initial assessment
3. diagnosis
4. assignment to the treatment team
5. social skills training
6. ongoing assessment
7. relapse prevention training
8. transition to self-management

PROCESSES
A. heightened participant expectation of treatment success
B. acquisition of social skills
C. acquisition of relapse prevention skills
D. acquisition of self-management skills

OUTCOMES
i. improved psychological functioning
ii. heightened sense of self-worth
iii. more dates per person approached
iv. reduced use of tobacco
v. reduced abuse of alcohol

FIGURE 8.1.  Resources → activities → processes → outcomes analysis (RAPOA) logic model for a substance abuse treatment program.

[Figure 8.2 depicts a flow diagram. Resources (a supervisor in the office, staff in a van, the van parked in the community, and new syringes) support the provider activity of making new syringes available at no cost to users in the context of a trusted community outreach program. Participant (user) activities then branch: an old syringe is exchanged for a new syringe, which the user uses; or the old syringe is reused by the same user, the new syringe is sold instead, or the old syringe is used by an old or new user. Processes: the old syringe may be contaminated by HIV, or the user may already be infected by the same strain of HIV. Outcomes: no new infection with this strain of HIV and lower infection rates for users of the free needle exchange program, hence a lower HIV rate in the community; or an old or new user is infected by a new strain, yielding a higher HIV rate in the community.]

FIGURE 8.2.  RAPOA model for evaluation of a needle exchange program.


for understanding what made targeted outcomes more likely or less likely. Activity delineation is particularly important for evaluations that hope to advise programs on how to control the costs of achieving targeted outcomes.

Subsequent to developing a resource × activity matrix, a comprehensive cost-inclusive evaluation creates activity × process and process × outcome matrixes. These allow causal connections to be made explicit, from specific resources consumed by the program to specific outcomes generated by the program. These matrixes can be developed via focus groups and other qualitative methods in meetings with program staff, as detailed in Yates (1999). Qualitative or quantitative research then can refine, and sometimes correct, matrixes describing active relationships between resources and activities, activities and processes, and finally processes and outcomes of the program.

Although the cost-inclusive evaluator can begin by asking interest group representatives to start a logic model by listing in separate columns the major resources, activities, processes, and outcomes of a program, as shown in Figure 8.1, our experience is that most representatives balk at this task. Beginning with the elements of the model with which an interest group is most familiar, or in which it is most interested, can create a more constructive atmosphere and more engaged participants. Most interest groups, including providers and participants as well as funders, find it easier to start by explaining what they do (i.e., their activities in the program) and what it takes to do that (i.e., the major resources consumed in each activity). As lists of resources and activities are refined by an evaluation in additional meetings with interest groups, participants begin to describe how different activities are enacted by program providers depending on the context in which the program operates and the needs and abilities of individual participants. The choice of activities often depends on environmental factors, but the choice is made by providers so that positive changes in processes inside participants are more likely to produce desired program outcomes, such as enhanced social and occupational skills, improved self-efficacy expectancies, and alternative, more positive constructs of self—for example, as a recovering addict rather than a hardened criminal, or a self-manager rather than an out-of-control person.
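A RAPOA logic model lends itself to a simple data representation that makes the resource → activity → process → outcome chains explicit and traceable. The sketch below is illustrative only, using a subset of the Figure 8.1 elements; the specific linkages shown are our assumptions for illustration, not a validated model.

```python
# Minimal RAPOA sketch: activities consume resources and modify processes;
# processes make outcomes more likely. Linkages are hypothetical.
rapoa = {
    "resources": ["staff time and expertise", "space", "assessment instruments"],
    "activities": {
        "social skills training": {
            "uses": ["staff time and expertise", "space"],
            "modifies": ["acquisition of social skills"],
        },
        "relapse prevention training": {
            "uses": ["staff time and expertise", "space"],
            "modifies": ["acquisition of relapse prevention skills"],
        },
    },
    "processes": {
        "acquisition of social skills": ["improved psychological functioning"],
        "acquisition of relapse prevention skills": ["reduced abuse of alcohol"],
    },
}

def outcomes_of(activity):
    """Trace activity -> processes -> outcomes through the model."""
    outcomes = []
    for process in rapoa["activities"][activity]["modifies"]:
        outcomes.extend(rapoa["processes"][process])
    return outcomes

print(outcomes_of("relapse prevention training"))  # ['reduced abuse of alcohol']
```

Even a toy model like this makes it easy to ask the cost-control question raised above: which other activities, using cheaper resources, reach the same processes?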

DEVELOP A RESOURCE × ACTIVITY MATRIX TO CHARACTERIZE AND ANALYZE THE PROGRAM

Major steps in developing a resource × activity matrix involve asking representatives from the major groups interested in an evaluation (participants and their advocates, participants' families, providers, managers, funders, and the community) to do the following:


1. identify the major activities of the program, with labels and definitions for each;
2. identify the major resources used by the program, again with labels and definitions for each;
3. provide feedback regarding the completeness of matrixes that list resources in rows and activities in columns (or with rows and columns transposed if there are far more activities than resources); and
4. estimate the amount of each major resource used by the program to make each activity possible.

Because interest group representatives commonly report that the amount of resources, such as provider time, can vary considerably across providers, Step 4 asks stakeholders for not only "best guess" estimates of resource amounts used by the activity but also "highest common" and "lowest likely" estimates. If processes and outcomes are to be examined in an evaluation as well, this is a good time to ask different representatives also to identify those processes that are modified via different program activities to achieve the outcomes the stakeholder identifies as important for the program. The four steps in constructing a resource × activity matrix are discussed below with examples.
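Step 4's three-point estimates fit naturally into a sparse matrix keyed by (resource, activity) pairs. The following Python sketch is illustrative; the staff hours are hypothetical, not drawn from any program in the book.

```python
# Sparse resource x activity matrix: each cell holds (lowest likely,
# best guess, highest common) estimates of hours used. Figures hypothetical.
matrix = {
    ("PhD clinical psychologist", "diagnosis"): (0.5, 1.0, 2.0),
    ("PhD clinical psychologist", "social skills training"): (1.0, 1.5, 3.0),
    ("BA paraprofessional", "intake interview"): (0.25, 0.5, 1.0),
}

def hours_by_activity(matrix, which=1):
    """Total estimated hours per activity; which: 0=low, 1=best guess, 2=high."""
    totals = {}
    for (_, activity), estimates in matrix.items():
        totals[activity] = totals.get(activity, 0) + estimates[which]
    return totals

print(hours_by_activity(matrix))
# {'diagnosis': 1.0, 'social skills training': 1.5, 'intake interview': 0.5}
```

Summing with `which=0` and `which=2` brackets each activity's resource use, which is useful when stakeholders disagree about typical amounts.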

LIST AND DEFINE THE MAJOR ACTIVITIES OF THE PROGRAM

Developing a resource × activity matrix (e.g., Table 8.1) is an often-positive effort that requires several iterations before most interest groups are satisfied with the result. Typically, one begins constructing the matrix by listing the major activities of the program and only later listing the major resources used by the program. Most operators, providers, and participants enjoy explaining what they do. For some programs, activities are listed in proposals for funding, more specifically in individualized treatment plans, or in "Method" sections of research reports. If they are not, interest groups can simply name or describe the activities they routinely offer or participate in. This can take several hours of focus groups or individual interviews. Activity lists can often become too long; these need to be summarized so that only 5–10 activities are listed, to prevent further analyses from becoming unwieldy. Arriving at consensus on what to call these activities and how to describe them so they can be recorded reliably and validly is a constructive result of this first step in understanding relationships between resources and activities within a program.


TABLE 8.1.  Resource × Activity Matrix for Time and Expertise Resources

Activities (columns): Intake Interview; Initial Assessment; Diagnosis; Assignment to Treatment Team; Social Skills Training; Ongoing Assessment; Relapse Prevention Training; Transition to Self-Management; Other: ____

Resources (rows):
  Staff: MD Psychiatrist; PhD Clinical Psychologist; LCSW Practitioner; BA Paraprofessional; Graduate Intern; Undergraduate Intern
  Participants and Participant-Related: Employer; Employee; Mother; Father; Sibling; Other: ____
LIST AND DEFINE THE MAJOR RESOURCES USED BY THE PROGRAM

When asking stakeholders to identify the major activities of a program, the time of providers, participants, and administrators often is the major resource discussed. Space, however, along with equipment, materials, and communications services, often constitutes additional resources without which major program activities may be impossible. Transportation of participants to program delivery sites, or of providers to participants, also may be crucial for some activities. These can all be listed in the emerging resource × activity matrix, as illustrated in Table 8.2.


TABLE 8.2.  Resource × Activity Matrix for Program Space Resources

Activities (columns): Intake; Initial Assessment; Diagnosis; Assignment to Treatment Team; Social Skills Training; Ongoing Assessment; Relapse Prevention Training; Transition to Self-Management; Other: ____

Space resources (rows): Offices for Individual Counseling; Group Meeting Rooms; Administrative Areas; Common Space for Direct Service; Common Space for Indirect Service

Discussions about which activities or resources are sufficiently important to record can be involved but enlightening. For example, a question posed to the evaluator about whether to record the presence of, and expense associated with, a particular resource (a coffee machine at a drop-in center for participant-operated mental health services) was answered with another question: "Would the services work similarly or better if the coffee machine was not there?" The evaluator learned that neither providers nor participants could conceive of program operations without coffee. This illustrates how resource × activity model building in cost-inclusive evaluation can lead to a better understanding of the program by many.

CAPTURE ACTIVITY OCCURRENCE, FREQUENCY, AND INTENSITY

Recording the occurrence, duration, sequencing, and other characteristics (parameters) of program activities often is a greater impediment than getting endorsements to record "what's going on" in a program. For cost-inclusive evaluation to be useful, activity records need to include not only the descriptors of an activity but also what resources were used to enact and maintain that activity. One can quickly check off a series of activity


parameters, recording, for instance, who did what with whom, with what and where, for how long or how intensely (see Table 8.3). These check-offs could be done using small portable instruments, including smartphones or watches that allow near-instant recording, which need not interfere with program operations.

Although activity recording systems may be available in generic forms, we advise tailoring the recording system to the program. In particular, we recommend involving program providers and participants in designing and pilot testing recording systems. This will not only better inform the system but can also improve buy-in to the recording system, as well as to the evaluation more generally. Yates (the second author) found, for example, that the recording system for use of volunteered and donated resources that he developed hand in hand with a participant and program manager was the only recording system successfully adopted in a multiyear project by most sites! Pilot testing the system, redesigning it, and repeating the refinement cycle (Figure 8.3) should fine-tune it. This, too, is what worked well for Yates in a multiyear, multisite research project (Yates et al., 2011).
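A check-off instrument like Table 8.3 might be backed by a simple structured log. The sketch below is illustrative; the field names and example values are ours, not a specification from the book.

```python
# Sketch of a who-did-what-with-whom-where-how-long record as a structured
# log entry. Field names and example values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ActivityRecord:
    who: str               # e.g., "LCSW practitioner"
    did_what: str          # e.g., "social skills training"
    with_whom: str         # e.g., "participant #12"
    with_what: str         # e.g., "computer"
    where: str             # e.g., "office"
    minutes: int           # duration of the activity
    intensity: int         # 1-10 scale
    recorded_at: datetime = field(default_factory=datetime.now)

log = [ActivityRecord("LCSW practitioner", "social skills training",
                      "participant #12", "computer", "office", 50, 7)]

# Aggregating such records yields the resource amounts per activity
# needed for the resource x activity matrix.
total_minutes = sum(r.minutes for r in log
                    if r.did_what == "social skills training")
print(total_minutes)  # 50
```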

ACTIVITIES PLANNED VERSUS ACTIVITIES IMPLEMENTED

Most programs have specific plans for operations, that is, for the nature and sequence of program activities to be performed by providers with participants. These plans can be a rich source of information on the activities that are supposed to occur in the program and on how contextual factors may moderate and determine activity choice. Often one or more theories that assemble principles from psychology, sociology, anthropology, medicine, and other disciplines are used to explain why a program performs particular activities with its participants. The literature review or method sections of contract and grant proposals, as well as research reports, often provide basic explanations of the "theory of the program" (more often termed program theory; Rogers, Hacsi, Petrosino, & Heubner, 2000), which justifies and guides selection and sequencing of program activities. For example, Martens, Smith, and Murphy (2013) designed brief activities to reduce college student drinking and related problems. They did either of the following:

1. informed students who drank excessively that, although they perceived their drinking as below normal, it actually was well above the average for similar students, or
2. taught students specific strategies for reducing alcohol consumption.



TABLE 8.3.  Instrument for Recording Who–What–Whom–What–Where–How Much Parameters of Resource Use for Program Activities

who (check one): MD psychiatrist; PhD clinical psychologist; LCSW practitioner; BA paraprofessional; graduate intern; undergraduate intern; other: ____
did what (check one): intake; initial assessment; diagnosis; assignment to treatment team; social skills training; ongoing assessment; relapse prevention training; transition to self-management; other: ____
with whom: participant #____; employee #____; teacher #____; student #____; partner #____; parent #____; employer #____; mother; father; sibling; other: ____
and with what: kitchen utensil; art supplies; vehicle; public transport; shop tools; computer; phone; clothing; other 1: ____; other 2: ____; other 3: ____
and where: office; home; work; class; bus; subway; car; range
for how long or how much: ____ min.; ____ (1–10) intensely; ____ (other parameter: ____)


EVALUATION FOR THE SCIENTIST–MANAGER–PRACTITIONER

FIGURE 8.3.  Refinement cycle for developing instruments for recording resource → activity, activity → process, and process → outcome relationships. (The figure depicts a cycle of instrument design: Initial Design, Involve Evaluators, Involve Funders, Involve Managers, Involve Providers, Involve Participants.)

Each intervention was translated into specific acts that could be scored as occurring or not occurring. Interventions for a randomly selected fifth of participants were scored by observers as not occurring, occurring, or being delivered at a level above the expectations of program operators. Both interventions were implemented with high fidelity to expectations. Intriguingly, only the first intervention was associated with better outcomes (significantly less drinking) than those found for control activities (basic education about alcohol). Neither intervention actually reduced problems related to alcohol consumption, however, raising questions about the hypothesized activity → process → outcome linkages.

USING RESOURCE × ACTIVITY MATRIXES TO IMPROVE PROGRAM COST-EFFECTIVENESS

Even at this early phase of a cost-inclusive evaluation, the program may gain insights that can aid management of resources. Cost-inclusive evaluation can describe the resources used for each activity in a resource × activity table such as that shown in the top panel of Table 8.4. Even a relatively simple matrix specifying the types and amounts of resources used for program activities can inspire ideas for program enhancement that could reduce costs, improve outcomes, or both. As suggested by Table 8.4, examination of a resource × activity matrix may help

Breaking Down Cost by Activity 


identify possible cost savings. Consider increasing paraprofessionals’ role in counseling, for instance, moving the clinical psychologist’s time more to supervision and assessment of fidelity to training guidelines and less to direct services such as social skills training (as shown in the top panel of Table 8.4, “Before Cost-Inclusive Evaluation,” and the bottom panel, “After Cost-Inclusive Evaluation”). Total staff hours and direct service hours to participants (“Providing Social Skills Training to Participants”) are unchanged, but the more expensive hours of the clinical psychologist are replaced by the less expensive hours of the paraprofessionals. Of course, this assumes that social skills training provided by paraprofessionals would be equivalent to social skills training provided by clinical psychologists. Research does support this assumption (e.g., Armstrong, 2010). Also, by multiplying pay rates (such as $100 per hour) by the hours shown in the “Before . . .” and “After . . .” matrixes in Table 8.4, the

TABLE 8.4.  Resource × Activity Matrix Before (Top) versus After (Bottom) Reallocating Resources among Activities to Reduce Costs

Cell content: average hours per week for Activity #5, Social Skills Training.

Before Cost-Inclusive Evaluation

Program Resources       Providing Social Skills     Supervision of Providers      Assessing Fidelity to     Total
                        Training to Participants    of Social Skills Training     Training Guidelines
Clinical psychologist             20                          10                            1                31
Paraprofessional 1                25                                                        5                30
Paraprofessional 2                10                                                        5                15
Total                             55                          10                           11                76

After Cost-Inclusive Evaluation

Program Resources       Providing Social Skills     Supervision of Providers      Assessing Fidelity to     Total
                        Training to Participants    of Social Skills Training     Training Guidelines
Clinical psychologist              2                          12                            6                20
Paraprofessional 1                34                                                        2                36
Paraprofessional 2                19                                                        1                20
Total                             55                          12                            9                76


actual savings could be estimated for the cost-inclusive evaluation. Actual impacts on outcomes of changing the hours spent by different staff members in this particular setting also would be important to measure. The idea is to build, via systematic data collection, a comprehensive model of resource → activity and other relationships for the program as it functions in its particular context.
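The multiplication of pay rates by the hours in Table 8.4 can be sketched in a few lines of Python. The $100-per-hour clinical psychologist rate comes from the text; the $25-per-hour paraprofessional rate is a hypothetical assumption for illustration:

```python
# Sketch: estimating weekly personnel cost before vs. after the reallocation
# in Table 8.4. The paraprofessional pay rate is an assumption, not from the book.
PAY_PER_HOUR = {
    "clinical_psychologist": 100.0,  # rate mentioned in the text
    "paraprofessional_1": 25.0,      # hypothetical rate
    "paraprofessional_2": 25.0,      # hypothetical rate
}

# Average hours per week from Table 8.4: (training, supervision, fidelity assessment)
before = {"clinical_psychologist": (20, 10, 1),
          "paraprofessional_1": (25, 0, 5),
          "paraprofessional_2": (10, 0, 5)}
after = {"clinical_psychologist": (2, 12, 6),
         "paraprofessional_1": (34, 0, 2),
         "paraprofessional_2": (19, 0, 1)}

def weekly_cost(matrix):
    """Total weekly personnel cost: hours summed across activities, times pay rate."""
    return sum(sum(hours) * PAY_PER_HOUR[staff] for staff, hours in matrix.items())

savings = weekly_cost(before) - weekly_cost(after)
print(weekly_cost(before), weekly_cost(after), savings)  # 4225.0 3400.0 825.0
```

Under these assumed rates, the reallocation saves $825 per week while total staff hours (76) and direct service hours (55) stay constant, mirroring the text's point.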

SUMMARIZING RESOURCES → ACTIVITIES FINDINGS, QUANTITATIVELY AND QUALITATIVELY

Roles for Quantitative and Qualitative Methods in Cost-Inclusive Evaluation

Many evaluators and evaluations call for a mixture of quantitative and qualitative methods. Mixed methods evaluations (e.g., Mertens, 2010) are often specified in solicitations for grant and contract proposals. As noted by Rogers et al. (2009) and as illustrated by Gorman, McKay, Yates, and Fisher (2018), cost-inclusive evaluations can and should include subjective judgments as well as objective numeric indices of each resource consumed, activity enacted, process inspired, and outcome achieved. More valid and useful cost-inclusive evaluations should result from:

1. using objective measures to validate and, when necessary, correct or augment subjective judgments,
2. using subjective judgments and insights to avoid measuring precisely what is least important, and
3. involving a variety of interest groups to ensure that important elements are included in both quantitative and qualitative models of the service program.

Discover Associations between Resources Used and Activities Performed

Table 8.5 shows the sort of resource × activity matrix that can facilitate both evaluation and improvement of a human services program. In the matrix, columns specify basic program activities: phoning, computer use, transportation, peer counseling, and singing (yes, singing, as reported by providers). In the same matrix, rows identify resources needed to implement those activities: people’s time, space, and “everything else” (that is as specific as the program wished to be). Note that, for this program, separate subrows list separate “paid for” and “volunteered” resources. Examples are provided of the “paid for” and






TABLE 8.5.  Resource × Activity Matrix Describing a Participant-Operated Mental Health Services Program

For Participant [ID #] at [program site] for   /   through   /  /  

Activities (columns; sampled every month for a randomly selected 2 out of 10 participants):
- Phoning (e.g., with family, to arrange job interview)
- Computer use (e.g., email with family, to find job, housing)
- Transportation (e.g., helping a new participant move belongings)
- Peer counseling (e.g., introduce program and activities)
- Singing (for fun, improved friendships, emotional support)

Resources (rows; based on estimates for the average activity for the regular participant), with separate paid-for versus volunteered or donated subrows:

People’s time (hours devoted to an activity for this participant):
  Paid for: once weekly for 0.15 hours @ $7.00 per hour (phoning); 3 times per week, 0.5 hours @ $7.00 per hour + 7 bus passes @ $2.00 (transportation); 1.5 hours per week @ $6.50 per hour (peer counseling)
  Volunteered: tutorial (0.75 hours × 2 times weekly); 3 times per week; 1 hour per week teaching (singing)

Space (devoted to an activity for this participant):
  Paid for: 4 feet × 4 feet @ $0.65 per square foot; 6 feet × 10 feet @ $0.65 per square foot; 12 feet × 23 feet @ $0.65 per square foot
  Donated: 20 feet × 30 feet room at clinic

Everything else (that was devoted to an activity for this participant):
  Paid for:     hours on $1,000 computer; van for     miles @ $  /mile; 1 guitar @ $120 (singing)


“volunteered” resources in the labels for each subrow to operationalize resources in ways that most stakeholders understand. Each intersection of a column and a row of the matrix indicates the amount of a resource needed for the corresponding activity. Empty matrix cells indicate that none of that particular type of resource was used for that particular activity. Faithfully and accurately recording the activities offered by providers and engaged in by participants can inform program managers, participant advocates, and others about the fidelity of program implementation. The more detailed the information recorded about specific program activities, the more fine-grained the match that can be made between what actually occurred and what should have occurred. These tallies also provide a foundation for detailed, activity-level evaluation of program costs. To reduce the likelihood that evaluation of activities provided or not provided becomes iatrogenic for a program, we note that a mismatch between planned and implemented activities need not be negative. Providers commonly adapt program plans to fit the cultural and other specific needs of a participant at a particular moment in time. Providers also take into consideration the time and other resources that are and are not available at the moment. For example, in substance-misuse programs, providers do not always spend a planned 50 minutes with each participant. Sometimes participants are encountered on the street, in the classroom, in the workplace, or while waiting for a dose of medication to reduce urges to use a drug. A skillful provider tailors program activities to the circumstances in which they must perform (APA Presidential Task Force on Evidence-Based Practice, 2006). Flexibility in implementation of program activities may be the hallmark of particularly good and potentially quite effective treatment.

Qualitatively Summarize Resource → Activity Findings

Involving program providers in program evaluations can be challenging. One method we have found helpful for involving providers in cost-inclusive program evaluation is to cast them in the role of experts in understanding, describing, and even estimating the amounts of resources, activities, processes, and outcomes needed in program models. This is helpful because it is true! Providers readily understand the distinction between activities performed to engage and change participants versus the resources used to perform those activities. So we usually begin cost-inclusive evaluations by asking program staff members to list the key activities they typically perform, followed by the key resources consumed when performing those activities. After explanation of the difference between processes and outcomes, program providers usually list both, often moving them from one column to another as distinctions between


processes and outcomes are better understood by staff members (and by us evaluators). For example, in a residential substance abuse treatment program, Dorothy Lockwood-Dillard, in collaboration with Yates, worked for an afternoon with program providers to list:

1. the primary activities of the program,
2. the principal resources consumed by the program to conduct those activities,
3. the key processes internal to participants that were targeted for change by the primary activities of the program, and
4. the targeted outcomes, both immediate (proximal) and delayed (distal), that were hoped to occur as a result of changes in processes that were caused by program activities that were, in turn, made possible by program resources (as detailed in Yates, 1999).

The specific items described by staff for each of these elements of the program model are listed in Table 8.6, using labels developed by Lockwood-Dillard and program staff for specific resources, activities, processes, and outcomes. Three matrixes resulted from juxtaposing in rows and columns the items listed as resources and activities, activities and processes, and processes and outcomes. Each pairing represented a different set of relationships for evaluation. For example, the resource × activity matrix asked providers, participants, and other stakeholders to describe the nature of specific relationships between the resources listed in the leftmost column of Table 8.6 and the activities listed in the second column from the left. After some discussion, these relationships often can be characterized in a few words, and often with numbers. For example, the relationship between the resource “direct service staff” and the activity “group counseling” could be described as the proportion of time usually spent by those staff members in that activity during a typical week at the program (perhaps 18%, arrived at after some discussion and consensus building).
The cost of that time could be estimated by applying the 18% to the total salaries and benefits for those staff members for a typical week: say, $2,500 × 18% = $450. This information would then be entered in the matrix shown in Table 8.7, specifically in the cell designated by the Direct Service Staff row and the Group Counseling column. Similar discussions and arithmetic can describe most resource → activity relationships. Estimates of the existence and strength of activity → process and process → outcome relationships also can be generated in an afternoon of group discussion, as shown by Lockwood-Dillard’s work with Yates (cf.


TABLE 8.6.  Logic Model for Substance Abuse Treatment Program

Resources              Activities                    Processes                               Outcomes
Direct service staff   Group counseling              Self-efficacy expectancies              Complete abstinence from drugs
Administrative staff   Relapse prevention training   Relapse prevention skills               Stable employment
Facilities             Individual counseling         Support access skills                   Avoidance of all criminal behavior
Utilities              Case management               Service access skills                   Compliance with probation and parole
Support staff                                        Bonding with addicts and ex-offenders
Supplies                                             Bonding with counselors
Urine drug tests

Source: Yates (1999).

TABLE 8.7.  Resource × Activity Matrix for Substance Abuse Treatment Program

Activities →           Group        Relapse Prevention   Individual   Case
Resources ↓            Counseling   Training             Counseling   Management
Direct service staff
Administrative staff
Facilities
Utilities
Support staff
Supplies
Urine drug tests

Source: Yates (1999).


Yates, 1999). Specific methods for quantifying activity → process and process → outcome relationships are described in the next chapter.
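The proportion-based costing just described (18% of a $2,500 weekly salary pool = $450 for group counseling) generalizes to any set of consensus time shares. A minimal Python sketch; the shares other than the 18% are hypothetical placeholders, not figures from the Lockwood-Dillard evaluation:

```python
# Sketch: allocating a staff cost pool to activities by consensus time proportions,
# generalizing the $2,500 x 18% = $450 example from the text.
weekly_salaries_and_benefits = 2500.00  # direct service staff, per typical week

time_share = {
    "group_counseling": 0.18,             # 18% from the consensus discussion
    "individual_counseling": 0.30,        # assumed share
    "relapse_prevention_training": 0.22,  # assumed share
    "case_management": 0.30,              # assumed share
}

# Shares should account for all of the staff's time before costs are spread.
assert abs(sum(time_share.values()) - 1.0) < 1e-9

cost_by_activity = {activity: round(weekly_salaries_and_benefits * share, 2)
                    for activity, share in time_share.items()}
print(cost_by_activity["group_counseling"])  # 450.0
```

Each resulting value is what would be entered in the corresponding cell of a resource × activity matrix such as Table 8.7.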

ASSESS RELIABILITY AND VALIDITY OF RESOURCE → ACTIVITY FINDINGS

Information on relationships between the resources a program consumes and the activities in which providers engage participants can influence decisions about how to manage and fund programs. This information is obtained through quantitative measurement. The quality of that measurement can itself be evaluated according to common standards of reliability and validity. As detailed in texts on research design (e.g., Kazdin, 2003) and program evaluation (e.g., Posavac, 2011), there are many types of reliability and validity. These criteria for the quality of a measure can be applied to qualitative as well as quantitative measures (Peräkylä, 2004). Although the exact definitions of specific types of reliability and validity can differ between qualitative and quantitative approaches, persons who have similar perspectives because they belong to the same stakeholder group should be able to agree on (1) the types and amounts of resources used by the program, (2) the types and amounts of provider offering of, and participant participation in, specific program activities, and (3) the types and amounts of resources used by each activity offered and participated in. After coming to agreement on the types of resources used and activities enacted by the program, different teams of interest group representatives can meet separately to fill in values for the cells of their mutually agreed-upon resource × activity matrixes. Reliability of the naming of resources and activities, and of reports of resource × activity cell amounts, then can be measured by common statistics, such as percentage of agreement per cell and average percentage of agreement over all cells. Table 8.8 shows how this might begin, with specific lists of resources (vertically, in the leftmost column) and lists of activities (horizontally, in the next-to-topmost row) for each of the two interest group perspectives depicted in the matrix.
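The per-cell agreement statistic can be sketched as follows. The two teams' matrixes, their cell values, and the 10% tolerance for counting numeric estimates as agreeing are all illustrative assumptions, not prescriptions from the text:

```python
# Sketch: cell-by-cell agreement between two stakeholder teams' resource x activity
# estimates, summarized as average percentage of agreement over all cells.
def cell_agreement(a, b, tolerance=0.10):
    """Score 1 if two estimates fall within `tolerance` (proportion) of their mean."""
    if a == b:
        return 1
    mean = (a + b) / 2
    return 1 if abs(a - b) <= tolerance * mean else 0

# Cells keyed by (resource, activity); values are estimated weekly amounts.
team_1 = {("staff", "counseling"): 20, ("staff", "service_ads"): 2,
          ("facilities", "counseling"): 300, ("facilities", "service_ads"): 50}
team_2 = {("staff", "counseling"): 21, ("staff", "service_ads"): 5,
          ("facilities", "counseling"): 310, ("facilities", "service_ads"): 50}

agreements = [cell_agreement(team_1[cell], team_2[cell]) for cell in team_1]
percent_agreement = 100 * sum(agreements) / len(agreements)
print(percent_agreement)  # 75.0
```

Here three of the four cells agree within tolerance, so average agreement is 75%; low-agreement cells point to where the discussions described below should focus.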
Different perspectives on the resources and activities of a program would, of course, create different sets of cells of a matrix for each perspective, to later be integrated into a composite matrix showing all resources and all activities from all perspectives. As illustrated in the rightmost columns of Table 8.8, simple checks on resource × activity cell values can be made by totaling cell values for resources and comparing the total with the total amount of the resource in the budget. Total practitioner hours used by different activities should add up to the total practitioner hours available, for instance. If they do not, practitioner time spent in indirect services may have been under- or overestimated. An even better comparison can be made

TABLE 8.8.  Inter-Perspective Comparison for Resources, Activities, and Resource × Activity Cell Entries

Columns (Activities):
- Activity List from Perspective #1: Counseling, . . ., Service Advertisements
- Activity List from Perspective #2: Counseling, . . ., Service Advertisements
- Total According to Perspective: #1, #2; Total Value (Cost)
- Total According to Accounting: Budget, Actual Expenditures; Total Value (Cost)

Rows (Resources):
- Resource List from Perspective #1: Staff, Participants, Facilities, . . .
- Resource List from Perspective #2: . . .
- Bottom rows: Total occurrences according to Perspective #1, Perspective #2, Planned, Actual

Note. Validity of resource lists can be assessed by comparing observations to accounting records.


between totals for each resource row and actual total expenditures of the resource, possibly from accounting records. Although exact matching is unlikely, close correspondence could be anticipated. As illustrated in the lower rows of Table 8.8, checks also can be made on agreement between interest group perspectives and records of the occurrence of program activities. Similar to the distinction between the budgeted amount and the actual expenditures of specific resources, the planned and actual (observed) occurrence of specific activities can provide good criteria against which to compare estimates of how often each activity occurs according to each perspective.
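This row-total validity check is easy to automate. A sketch with illustrative figures; the 5% tolerance and all hour amounts are assumptions, not from the text:

```python
# Sketch: compare each resource row's total across activities against accounting
# records, flagging rows that diverge by more than a chosen tolerance.
def check_row_totals(estimates, accounting, tolerance=0.05):
    """Return {resource: (estimated_total, recorded_total)} for rows that diverge."""
    flagged = {}
    for resource, by_activity in estimates.items():
        total = sum(by_activity.values())
        recorded = accounting[resource]
        if abs(total - recorded) / recorded > tolerance:
            flagged[resource] = (total, recorded)
    return flagged

# Practitioner hours per activity from a stakeholder matrix (illustrative values).
estimates = {"practitioner_hours": {"counseling": 900, "case_management": 250,
                                    "indirect_services": 300}}
accounting = {"practitioner_hours": 1600}  # total practitioner hours available

print(check_row_totals(estimates, accounting))
```

Here the estimates total 1,450 hours against 1,600 recorded, so the row is flagged, suggesting (as the text notes) that time spent in indirect services may have been underestimated.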

When Reliability or Validity of Resource × Activity Matrixes Is Low

So, what if the findings are not good? What if different interest groups come up with markedly different lists of resources, or of activities, or of how much of which resources are used for specific activities? Some interest groups will argue that their perspective is intrinsically better or should be considered the “gold standard.” At some time or another, many interest groups assert that they are “first among equals.” These have, over the history of cost-inclusive evaluation to date, included researchers, providers, participants, funders, community representatives, and combinations thereof. The solution we recommend, both because it seems fairer and because it is more likely to work, is to hold a series of discussions to air and possibly iron out these differences in resources consumed, activities offered, and resource → activity relationships. At best, consensus will emerge, and the cost-inclusive evaluation can proceed. At worst, an evaluation will end up with two or more “bottom lines” that reflect real differences in perspectives on program resources, activities, and resource → activity characteristics. This last result need not itself be considered a failure but instead a reflection of the different realities perceived by different groups. For some human services, radically different perspectives may be needed to capture their complex, multifaceted nature.

RESOURCE COSTING

In a Resource × Activity Matrix, How Much of Each Resource Is Used in Each Activity?

One’s understanding of resource → activity relationships in a program often is augmented when moving to the next step in cost-inclusive evaluation: assigning specific amounts of resources to each activity. Acknowledging at the outset that different resources can be used for the same activity


can aid later decisions about how to reduce the costs of achieving similar outcomes or improve outcomes within current budget constraints. For instance, a PhD-holding clinical psychologist or a graduate intern could conduct intake interviews, depending on who is and is not available when a new participant presents. The evaluator can simply ask for the most common association of activity and resources, possibly with variation indicated quantitatively within the resource × activity matrix by placing in each cell estimates, or even observed probabilities, of occurrence (of, say, performance of an intake interview by a clinical psychologist versus by a graduate intern and all other personnel). Estimates of which resources are needed and used in different activities can be gleaned from program plans and prescribed “best practices.” As most practitioners, participants, and many managers know, however, budgeted or planned resource → activity assignments can vary markedly from the resources actually used when the activities are actually performed. Little provides as much realism and validity to an evaluation process as collecting data on what really happens in a program through direct observation and regular participation. Categorizing resources according to their use in specific activities can provide insight, as well as organization, to what otherwise can be a weedlike growth of lists and sublists. For example, all resources that involve providers’ time and skills and that are used in training, counseling, and relapse prevention might be categorized as “direct service personnel.” Although this is not the same as indicating that these resources are interchangeable, this categorization scheme can facilitate recording and analysis of specific resource → activity relationships.
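When cells hold probabilities of which resource performs an activity, an expected cost per occurrence can be computed as the probability-weighted sum of unit costs. A sketch using the psychologist-versus-intern intake example; the probabilities, hourly rates, and interview length are all assumed for illustration:

```python
# Sketch: expected cost of one intake interview when either of two staff types
# may conduct it. Probabilities and rates below are illustrative assumptions.
intake_staffing = [
    # (who, probability of conducting the interview, $ per hour)
    ("clinical_psychologist", 0.30, 60.0),
    ("graduate_intern",       0.70, 20.0),
]
interview_hours = 1.5  # assumed average length of an intake interview

# Expected cost = sum over staff types of probability x hourly rate x hours.
expected_cost = sum(p * rate * interview_hours for _, p, rate in intake_staffing)
print(round(expected_cost, 2))  # 48.0
```

The $48 expected cost sits between the intern-only ($30) and psychologist-only ($90) figures, weighted by how often each actually conducts the interview.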

Services, as Well as Resources, Can Be Indirect

Categorizing activities is another natural outgrowth of the iterative process of using stakeholder consensus to develop resource × activity matrixes. One can, for instance, include intake, initial assessment, diagnosis, and assignment to treatment team as indirect services, or whatever other label fits the needs of stakeholders. The evaluator may find that certain resources are used primarily for indirect services and that other resources are used primarily for direct services. In certain programs, some categories of activities are more likely to require specialized, licensed, and highly paid personnel, such as direct services in mental health programs. This and other categories, such as administration or management, can spark insights into program structure that can lead to recommendations for reducing costs, improving outcomes, or both. The substitution of less highly paid personnel for direct service described earlier, and the increased use of more highly paid and trained personnel for supervision, is just one such recommendation.


During the process of estimating how much of which resources are used for each activity, stakeholders may add resources that they realize are used for one or more activities but were left off the initial list. Sometimes this is the result of overspecifying the resources or of associating one resource with just one activity, such as “intake assessor” being listed as a resource solely for the activity “intake.” In most programs, the person conducting intake assessments participates in other activities as well. When moving to resources such as office space, it is more likely that different offices will be grouped as “individual counseling” space and others as “group meeting rooms.” As shown in Table 8.9, even more categories of resources are used when other resources are included, from equipment and supplies to medication and even transportation. Additional services, such as insurance, cleaning, maintenance, and security, may be included in facilities costs or, in other circumstances (if, for instance, the space is owned by the program), can be listed in additional rows.

TABLE 8.9.  Resource × Activity Matrix for Equipment and Materials, Plus Indirect Service Resources

Activities →   Intake | Initial Assessment | Diagnosis | Assignment to Treatment Team | Social Skills Training | Ongoing Assessment | Relapse Prevention Training | Transition to Self-Management

Resources ↓
Equipment (e.g., computers)
Supplies (e.g., printing paper)
Manuals
Client homework sheets
Medication
Transportation
Insurance
Space cleaning
Space maintenance
Security
Human resources
Accounting
Management/administration
Other:         

Include activities that support other activities: estimating the probabilities and amounts of each possible resource × activity combination also can result in the addition of some activities and the elaboration of categories of activities. This is especially likely if the exercise of listing activities, and then resources for those activities, is shifted from a retrospective “what did we usually do?” focus to “let’s develop a list of all the ‘ingredients’ we would need to replicate our program in a different setting.” The latter perspective, which we term replicative, often fosters realization of the scope of activities that are absolutely necessary for the program to operate even if the activities do not involve participants directly. These services can include those offered by other programs that directly involve the participant (extra-program direct services), such as public education via mass media and referrals to additional programs. Services also may be offered by other programs that do not directly involve the participant but do aid the program and make provision of direct services to participants possible. The latter are often called indirect services. Typically, indirect services include administration, management, and training conducted within the program (intra-program). Some programs also are supported by additional activities provided by other programs, such as those provided by the human resource and accounting departments of a hospital that supports satellite clinics throughout a region (extra-program support services). Examples of direct services provided by programs, and of indirect services provided within and outside the program, are given in Table 8.10.
Services that support other services also can include internal program evaluations that regularly collect information on performance of specific treatment activities or that compare services actually performed to services that were supposed to be performed. Other support services, such as continuing education for providers and in-­service training, might be dismissed as unnecessary for replication but are essential enough to be required for continued licensure for many professional providers.

VALUING RESOURCES CONSUMED

Rather than accepting a total cost figure derived from a budget or accounting summary, we encourage programs to explore and make clear to funders why they cost what they do. One straightforward way to do this is to delineate the specific resources that, when used to implement the activities of the



TABLE 8.10.  Examples of Direct and Indirect Services, Intra- and Extra-Program, in a Resource × Activity Matrix

Activities (columns):
- Direct Services (activities in which participants engage): Intra-program (provided by the program); Extra-program (provided by other programs)
- Indirect Services (activities in which participants do not engage but which do support the program being evaluated): Intra-program support services; Extra-program support services
- Total

Resources (rows): Providers, . . .

Examples:
- Direct services, intra-program: Intake, Counseling, Relapse prevention
- Direct services, extra-program: Placement
- Indirect services, intra-program support: Scheduling, Supervision of counselors
- Indirect services, extra-program support: Payroll, Service advertisements


program, lead to crucial changes in internal participant processes that in turn lead to targeted program outcomes. Getting from an estimate- or observation-based matrix showing the amount of each resource spent in each activity (a resource × activity matrix) to a matrix showing the costs of each resource type devoted to each activity involves the intermediate step of assigning a unit cost to each resource. A hypothetical example illustrates the three steps in the process.

Step 1 involves identifying the amount of each major resource type used in each activity. As shown in Table 8.11, resource amounts are described in their natural units. For instance, personnel spend 200 hours in individual counseling and 300 hours in group counseling. In terms of space, 300 square feet are used for individual counseling and 600 square feet for group counseling. The activity of ongoing evaluation (a form of monitoring and evaluation) has resources assigned to it as well.

Step 2 involves identifying the unit costs for each combination of resources and activities. As shown in Table 8.12, personnel are valued the highest for individual counseling (at $60 per hour). More valuable space is used for individual counseling (valued at $40 per square foot), with less valuable space (perhaps a basement room) used for group counseling. Unit costs for different resources used in different activities can be found in accounting records and budgets. If they are not available there, local “going rates” can be used. The time of people with different levels of education, training, certification, and experience can be valued using standard values for federal or provincial governments or pay rates found online in records from national departments of labor. The value of space can be found in real estate records or in advertisements for comparable space in similar areas of the same location. Detailed methods for assigning unit costs can be found in a variety of sources, for example, Yates (1996, 1999).
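The three steps can be sketched in code: resource amounts in natural units (Step 1) are multiplied cell by cell by unit costs (Step 2) to yield the cost of each resource for each activity (Step 3). The figures below are the hypothetical ones used in this example:

```python
# Sketch of Steps 1-3, using the hypothetical amounts and unit costs from the text.
amounts = {  # Step 1: resource amounts in natural units (hours, square feet)
    ("personnel", "individual_counseling"): 200,
    ("personnel", "group_counseling"): 300,
    ("personnel", "ongoing_evaluation"): 40,
    ("space", "individual_counseling"): 300,
    ("space", "group_counseling"): 600,
    ("space", "ongoing_evaluation"): 60,
}
unit_costs = {  # Step 2: $ per hour for personnel, $ per square foot for space
    ("personnel", "individual_counseling"): 60,
    ("personnel", "group_counseling"): 40,
    ("personnel", "ongoing_evaluation"): 30,
    ("space", "individual_counseling"): 40,
    ("space", "group_counseling"): 20,
    ("space", "ongoing_evaluation"): 20,
}

# Step 3: amount x unit cost, cell by cell
costs = {cell: amounts[cell] * unit_costs[cell] for cell in amounts}
print(costs[("personnel", "individual_counseling")])  # 12000
```

Each resulting cell (e.g., 200 hours × $60 per hour = $12,000 of personnel cost for individual counseling) is what appears in the corresponding cost matrix.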

TABLE 8.11.  Resources Consumed in Different Program Activities

                      ← Program Activities →
Program Resources ↓   Individual Counseling   Group Counseling   . . .   Ongoing Evaluation
Personnel             200 hours               300 hours          . . .   40 hours
Space                 300 square feet         600 square feet    . . .   60 square feet
. . .                 . . .                   . . .              . . .   . . .
Administration




TABLE 8.12.  Unit Costs of Resources

                      ← Program Activities →
Program Resources ↓   Individual Counseling   Group Counseling      . . .   Ongoing Evaluation
Personnel             $60 per hour            $40 per hour          . . .   $30 per hour
Space                 $40 per square foot     $20 per square foot   . . .   $20 per square foot
. . .                 . . .                   . . .                 . . .   . . .
Administration



We have found it useful to emphasize that accurate costing aids replication and dissemination of successful programs, rather than hindering or underfunding them with “lowball” estimates.

Step 3 involves multiplying resource amounts by the unit costs of resources to find the resource cost per activity. For example, the personnel cost for individual counseling would be 200 hours @ $60 per hour = $12,000. This and the other multiplication operations shown in Table 8.13 result in the costs listed in Table 8.14.

Some resources, frequently administration, as well as other “indirect” costs or “overhead,” are used to various degrees by most or all activities. For these resources, and for others that are difficult to observe being used in specific program activities such as participant counseling, one can find their total cost in budgets or accounting records. Then one would distribute that total cost over the different direct service activities using the same proportions as found for the total costs of those direct services. For instance, if the total direct costs of one activity were twice those of another activity, the former

TABLE 8.13.  Calculating the Cost of Each Resource for Each Activity

| Program Resources ↓ | Individual Counseling                 | Group Counseling                      | . . . | Ongoing Evaluation                   |
| ------------------- | ------------------------------------- | ------------------------------------- | ----- | ------------------------------------ |
| Personnel           | 200 hours @ $60 per hour              | 300 hours @ $40 per hour              | . . . | 40 hours @ $30 per hour              |
| Space               | 300 square feet @ $40 per square foot | 600 square feet @ $20 per square foot | . . . | 60 square feet @ $20 per square foot |
| . . .               | . . .                                 | . . .                                 | . . . | . . .                                |
| Administration      | . . .                                 | . . .                                 | . . . | . . .                                |



204 

  Evaluation for the Scientist–Manager–Practitioner

TABLE 8.14.  Amount of Resource × Resource Unit Cost = Total Resource Cost

| Program Resources ↓ | Individual Counseling | Group Counseling | . . . | Ongoing Evaluation | Total Cost of Resources |
| ------------------- | --------------------- | ---------------- | ----- | ------------------ | ----------------------- |
| Personnel           | $12,000               | $12,000          | . . . | $1,200             | $50,000                 |
| Space               | $12,000               | $12,000          | . . . | $1,200             | $30,000                 |
| . . .               | . . .                 | . . .            | . . . | . . .              | . . .                   |
| Administration      | $35,000               | $30,000          | . . . | $7,000             | $100,000                |

activity also should be assigned twice as much of the administrative and other indirect or overhead cost as the latter activity. First, let's consider a simple example; next, a more detailed, more realistic one. As our hypothetical example shows, total administration expenses are $100,000 (see Table 8.14). In Table 8.15, we show how this $100,000 is allocated to the specific activities in proportion to the total direct cost of the activities. For convenience of calculation in this hypothetical example, the total cost of direct services also happens to add up to $100,000, making the proportional allocation of the $100,000 in administrative costs especially easy (see Table 8.15). The table also shows the total costs for each activity, computed by simply adding the indirect ("Administration") costs to the direct ("Total Cost of Direct Services") costs. Results are shown in the bottom row of Table 8.15.

TABLE 8.15.  Allocating Administrative Costs in Proportion to Total Direct Costs

| Program Resources ↓           | Individual Counseling | Group Counseling | . . . | Ongoing Evaluation | Total Cost of Resources |
| ----------------------------- | --------------------- | ---------------- | ----- | ------------------ | ----------------------- |
| Personnel                     | $12,000               | $12,000          | . . . | $1,200             | $50,000                 |
| Space                         | $12,000               | $12,000          | . . . | $1,200             | $30,000                 |
| . . .                         | . . .                 | . . .            | . . . | . . .              | . . .                   |
| Total Cost of Direct Services | $35,000               | $30,000          | . . . | $7,000             | $100,000                |
| Administration                | $35,000               | $30,000          | . . . | $7,000             | $100,000                |
| Total Activity Costs          | $70,000               | $60,000          | . . . | $14,000            | $200,000                |
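The three steps, plus the proportional allocation of administrative costs, can be sketched in a few lines of code. This is an illustrative sketch using the hypothetical figures from Tables 8.11 through 8.15; the "Other" entry is an assumption that stands in for the activities elided by ". . ." in the tables, with a value implied by the $100,000 totals.

```python
# Steps 1-3: resource amounts x unit costs = cost of each resource per
# activity, then allocate administration in proportion to direct costs.
amounts = {  # Step 1: resource amounts per activity, in natural units
    ("Personnel", "Individual Counseling"): 200,   # hours
    ("Personnel", "Group Counseling"): 300,
    ("Personnel", "Ongoing Evaluation"): 40,
    ("Space", "Individual Counseling"): 300,       # square feet
    ("Space", "Group Counseling"): 600,
    ("Space", "Ongoing Evaluation"): 60,
}
unit_costs = {  # Step 2: unit cost of each resource in each activity ($)
    ("Personnel", "Individual Counseling"): 60,
    ("Personnel", "Group Counseling"): 40,
    ("Personnel", "Ongoing Evaluation"): 30,
    ("Space", "Individual Counseling"): 40,
    ("Space", "Group Counseling"): 20,
    ("Space", "Ongoing Evaluation"): 20,
}
# Step 3: amount x unit cost = cost of each resource for each activity
costs = {key: amounts[key] * unit_costs[key] for key in amounts}

# Allocate $100,000 of administration in proportion to each activity's
# total direct cost (per-activity totals from Table 8.15; "Other" is an
# assumed bucket for the elided activities, implied by the $100,000 sum).
direct = {"Individual Counseling": 35_000, "Group Counseling": 30_000,
          "Ongoing Evaluation": 7_000, "Other": 28_000}
admin_total = 100_000
admin_share = {a: admin_total * d / sum(direct.values())
               for a, d in direct.items()}
total_activity = {a: direct[a] + admin_share[a] for a in direct}
print(total_activity["Individual Counseling"])  # 70000.0
```

Because direct costs also total $100,000 in this hypothetical, each activity's administrative share equals its direct cost, reproducing the bottom row of Table 8.15.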




DEALING WITH UNMEASURED OR UNALLOCATED RESOURCES

Now for a more complex, realistic example. Sometimes it may be difficult or impossible to allocate to a specific activity certain resources, such as the time and resources of administrative staff and program managers, as well as general meeting space and communications equipment. Excluding these resources from cost calculations would indeed lower the "bottom line," but could result in inadequate funding in the future. Excluding such major resources from cost calculations also disrespects their contributions to program outcomes. Resources unrelated to any particular activity can, of course, simply be apportioned among all activities. That is what is done with "overhead" resources, such as administrator time and office space used by all activities (e.g., conference rooms, restrooms, and hallways). Usually, resources that are too difficult to allocate to specific, direct service activities do need to be distributed somehow over activities. Although assigning equal amounts to all activities is one simple, tempting solution, this could distribute a large amount of a resource to an activity that consumes small amounts of all other resources. For example, $100,000 in administrator costs could be distributed in equal $20,000 amounts to each of five principal program activities. However, one or two of those activities (e.g., Meals on Wheels) could consume far fewer other resources than the remaining activities (e.g., a homeless shelter and substance abuse treatment). It seems fairer, as well as more accurate, to allocate resources unrelated to specific activities in proportion to the total value of resources that are allocated to specific activities, as shown in Table 8.16. For this program, the activity known to consume the highest proportion of other resources ($10,000/$18,150 × 100 = 55%) was allocated the highest proportion of the unassigned "overhead" resource ($100,000 × 55% = $55,000). The table also shows the intermediate steps of calculating the percentage used to allocate the unassigned resource to each activity.

SO, WHAT’S NEXT? The amount and current monetary value of resources consumed in different program activities have been delineated, quantified, and made available for analysis. This information can have intrinsic value for managers, funders, and participant advocates. But there is more that can be done with these data. The amount and monetary value of resources consumed to achieve specific outcomes can be isolated if causal paths from resources through processes altered by program activities are followed through to outcomes.

TABLE 8.16.  Alternative Approach for Allocating Indirect or "Overhead" Resources among Program Activities

The overhead resource (not allocated to specific activities) totals $100,000. "Equal amounts to all" allocation of overhead: $100,000 / 8 activities = $12,500 for each of the 8 activities. Proportional allocation: each activity's percentage of total known ($18,150) resource cost × the $100,000 "overhead" resource value.

| Activities                                                                       | Known resource cost | Equal allocation | % of total known cost | Proportional allocation |
| -------------------------------------------------------------------------------- | ------------------- | ---------------- | --------------------- | ----------------------- |
| Direct Services (activities in which participants engage), Intra-Program (provided by the program): | | | | |
|   Counseling                                                                     | $3,000              | $12,500          | $3,000/$18,150 = 17%  | $16,529                 |
|   Relapse Prevention                                                             | $10,000             | $12,500          | $10,000/$18,150 = 55% | $55,096                 |
| Indirect Services (activities in which participants do not engage but that support the program being evaluated), Intra-Program Support Services: | | | | |
|   Intake                                                                         | $1,000              | $12,500          | $1,000/$18,150 = 6%   | $5,510                  |
|   Placement                                                                      | $1,000              | $12,500          | $1,000/$18,150 = 6%   | $5,510                  |
|   Supervision of Counselors                                                      | $500                | $12,500          | $500/$18,150 = 3%     | $2,755                  |
|   Scheduling                                                                     | $2,000              | $12,500          | $2,000/$18,150 = 11%  | $11,019                 |
|   Payroll                                                                        | $250                | $12,500          | $250/$18,150 = 1%     | $1,377                  |
| Indirect Services, Extra-Program Support Services (provided by other programs):  |                     |                  |                       |                         |
|   Service Advertisements                                                         | $400                | $12,500          | $400/$18,150 = 2%     | $2,204                  |
| Total                                                                            | $18,150             | $100,000         | 100%                  | $100,000                |
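The two allocation rules in Table 8.16 can be contrasted in a short sketch. This is illustrative code using the table's hypothetical figures; activity names and dollar amounts come from the table.

```python
# Equal vs. proportional allocation of a $100,000 overhead resource
# across eight activities with $18,150 of known activity-specific costs.
known = {
    "Counseling": 3_000, "Relapse Prevention": 10_000,
    "Intake": 1_000, "Placement": 1_000,
    "Supervision of Counselors": 500, "Scheduling": 2_000,
    "Payroll": 250, "Service Advertisements": 400,
}
overhead = 100_000
total_known = sum(known.values())  # 18,150

# Rule 1: equal amounts to all activities ($12,500 each)
equal = {a: overhead / len(known) for a in known}
# Rule 2: in proportion to each activity's known resource cost
proportional = {a: overhead * v / total_known for a, v in known.items()}

print(round(proportional["Relapse Prevention"]))  # 55096
```

Relapse Prevention, which consumes 55% of known resources, receives $55,096 under the proportional rule but only $12,500 under the equal-amounts rule, which is the distortion the chapter warns about.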


This full use of RAPOA—of resources → activities → processes → outcomes analysis—is the topic of the next chapter.

SUMMARY

A fundamental logic model for cost-inclusive evaluation begins with resources used to conduct program activities. These activities are designed to start, increase, decrease, or end specific processes occurring inside the participant, whether that participant is defined as an individual, family, school, or firm. Particular participant processes were chosen for modification because they are most likely to lead to the outcomes desired for the participant, from short-term outcomes such as acquiring a certain skill to graduating and, in the longer term, gaining better-paying employment. Often theory and principles from psychology, sociology, or social work are used to select which processes to modify in the program. Adding qualitative and quantitative information or data to this model makes possible RAPOA, a form of cost-effectiveness and cost-benefit analysis that goes beyond an often-simplistic input–output approach to understanding and improving organizations to a systematic, data-based approach.

Specific methods were described for designing and refining instruments to collect concrete information for RAPOA models, focusing especially on resource → activity relationships in this chapter. Examples of results of using these methods are provided. The process of working with different interest groups to list and define program resources and activities can itself be formative and a form of evaluation. Logging time and other resources spent in the different activities can refine and eventually validate resource and activity lists. Analyzing data from resource → activity logs can foster better understanding of resource → activity relationships. These are best depicted and analyzed in resource × activity tables that list resources used in rows and program activities in columns. Quantitative and qualitative data entered into cells include estimates or, better, actual reports by program staff of

- time spent by different program staff in activities with specific program participants,
- space, equipment, and other materials devoted to participant and administrative services, and
- time spent in administrative and "overhead" activities.


Costs of the natural units in which resources occur, such as hours of staff time and square meters of program space—unit costs—are placed in resource × activity tables so that the amount of each resource used can be multiplied by the cost of a unit of each resource to arrive at the monetary value, that is, the cost, of each resource for each activity for each participant. Methods are described, and examples provided, for allocating administrative and other overhead resources among specific activities and participants.


DISCUSSION QUESTIONS

For this and the following chapter, choose a program with which you are familiar, such as a health service, a manufacturing company, a government service system, or even the degree program in which you or a friend are enrolled.

(1) In a table, list in different rows the major resources of the program—its key "ingredients," without which the program could not operate. Time and expertise, space, and equipment are common types of resources, but being more specific can provide more insights later in the evaluation.

(2) Now list in columns the major activities of the program—what you understand to be the essential operations of the program. Examples can include, for health services such as a hospital or clinic: intake, admissions, diagnosis, treatment, and discharge. For an education program, consider admission, matriculation, enrollment in specific courses, capstone or portfolio completion, and graduation. Don't forget to include operations that may come later in program operations, such as follow-up for a treatment or development for alumni of an education program.

(3) Show your draft resources × activities table to someone else in your class. Ask him or her to role-play a representative of another interest group, such as a faculty member (if you are working with a degree program example) or a provider (if you are working with a health service example). You will gain more insights (and more rows and columns, typically) if your colleague takes on a role that is markedly different from the role you adopted. For example, if you are imagining you are a provider of government services, ask your colleague to imagine he or she is a recipient of some of those government services.

(4) Ask your colleague to construct a table of his or her own resources and activities that he or she imagines to be essential to the program. Service recipients may need to be asked to imagine their usual visits to the program, for instance. If they took a bus or drove, that transportation would be pretty essential!

(5) For each resources × activities table, fill in as many cells in the table as possible, and ask your colleague to do the same. Estimate how much time usually is spent getting to and from the service site (e.g., classroom, clinic, office). If a computer, tablet, or phone is needed for the service, estimate how much that costs to rent. Then write those minutes of time, that distance traveled, that amount of computer rent that would be paid, into the table. Estimates are better than nothing. High, "best-guess," and low estimates are fine; they indicate the variability common between different participants and different interactions or sessions of engagement with the program.

(6) Find, from local salaries, benefits, rental rates, and so on, the rough unit costs of each resource. Then multiply that unit cost by the amount of time, the distance traveled, and so on to estimate the cost of each resource for each activity. Then total across rows to find the total approximate cost of each activity and across columns to find the total cost of each resource used. The results can be insight provoking.
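The arithmetic in items (5) and (6) can be sketched as follows. The program, resources, activities, and numbers here are hypothetical, invented purely for illustration.

```python
# Sketch of discussion items (5)-(6): fill a resources x activities
# table with amounts, multiply by unit costs, then total across rows
# (per resource) and down columns (per activity).
amounts = {  # resource -> {activity -> amount, in natural units}
    "Student time (hours)": {"Commuting": 10, "Class sessions": 40},
    "Laptop rental (weeks)": {"Commuting": 0, "Class sessions": 15},
}
unit_costs = {"Student time (hours)": 15,   # $ per hour
              "Laptop rental (weeks)": 12}  # $ per week

cost = {r: {a: amt * unit_costs[r] for a, amt in acts.items()}
        for r, acts in amounts.items()}
activities = list(next(iter(amounts.values())))
cost_per_activity = {a: sum(cost[r][a] for r in cost) for a in activities}
cost_per_resource = {r: sum(acts.values()) for r, acts in cost.items()}

print(cost_per_activity["Class sessions"])        # 780
print(cost_per_resource["Student time (hours)"])  # 750
```

Both sets of totals sum to the same grand total, which is a quick consistency check on any resources × activities table you fill in.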

CHAPTER 9

Completing the Model with Activity → Process and Process → Outcome Analyses

So far, we have recommended that evaluation move beyond approaches that ignore the "cost" half of the equation that makes services possible and desired outcomes more probable. We also have urged evaluators to advance beyond approaches that conceptualize programs as static "black boxes" that cannot be examined or improved, as detailed in Yates (2021). Traditional economic evaluation, and some traditional human services research, quantifies and compares inputs and outputs. Relationships between "costs in" and "outcomes out" may be expressed as correlations, as differences such as benefit − cost (net benefit), or as ratios such as benefit/cost, cost/effectiveness, or cost/utility, as in cost per Quality-Adjusted Life Year (cost per QALY). There even may be speculation about the events and processes occurring inside the program, but these are seldom acknowledged explicitly outside of the "Discussion" section of an article or report, and they rarely are incorporated into an evaluation report. If activities internal to the program or processes internal to the participant are assessed, measurement of those typically is spotty or consists only of anecdotes and case descriptions.

Formative evaluation, including cost-inclusive evaluation, goes beyond costs, beyond outcomes, and beyond speculation about what goes on between the two. Instead, formative cost-inclusive evaluation opens up the "black box" of program operations for quantitative examination (Taxman & Yates, 2001; Yates, 1997). Information is collected systematically to describe the key relationships between what the program does and what it attempts to change in individuals, couples, families, or communities. Programs themselves are asked to reveal what they do by describing chains of specific service activity sequences. Programs also are asked to describe why they engage participants in those activities, specifying the changes in processes that are targeted inside participants as a result of each activity, as well as the observable outcomes that are supposed to result from those changes. That's what resources → activities → processes → outcomes analysis (RAPOA; Yates, 1997) is all about.

DISCOVER THE BIOPSYCHOSOCIAL PROCESSES THAT MAKE A PROGRAM WORK

The next two steps in building an RAPOA model for cost-inclusive evaluation use the concept of intra-participant, often biological, psychological, or social (biopsychosocial) processes. These, and their measurement, need to be explored before we evaluate activity → process and process → outcome relationships. Yates (the second author) is a psychologist—a researcher, not a clinician—but that still leads him to posit that there's something going on between our ears . . . and between the things that other people around us do or say and our subsequent behavior. Few of us would deny that our or others' actions can lead us to feel excited, angry, happy or sad, or even bored. Although feelings and thoughts certainly can follow rather than cause our behavior (cf. Bandura, Blanchard, & Ritter, 1969), few of us would refute the causal role that emotions and cognitions can have in producing some outcomes. Sometimes the result of specific program activities, from a prevention-focused appeal to an anxiety-ameliorating therapy session, is less obvious to others but quite noticeable to us in terms of changes we notice in our internal processes. Our less desirable temptations may be diminished, our exercise and other healthy behaviors may be made more likely, and our interactions with others may be improved. Viewed through the RAPOA model, what that other person did was an activity, what we then felt or thought was a biopsychosocial process, and what we then did was an observable behavioral outcome. In a program, the activities in which participants are invited to engage are planned with great care so that they produce the outcomes desired by changing specific processes. Why add processes to the model? As a body of psychological research has demonstrated, the reliability with which desired outcomes follow specific activities can be improved if the intervening processes are known (Bandura, 1997a). Relationships between activities, processes, and outcomes can be complex. Activities can result in internal processes that prompt other internal processes that ultimately result in the desired
changes in behavior. It also is likely that changes in behavior can prompt subsequent changes in emotions and cognitions, from pride in a job well done or a temptation successfully resisted to higher self-efficacy expectations, all of which may make maintenance and further improvement in behaviors more likely (Bandura, 1997b). When the outcomes do not occur, research suggests that the reason often is that the necessary process linking the activity and outcome did not occur. For example, Kissel-VanVoolen (cited in Yates, 2002) found that a substance use prevention program had outcomes that were the opposite of what was intended: Adolescents, especially female adolescents, increased rather than decreased intent to use and use of alcohol, tobacco, and other drugs (ATODs). Analyses of data on process → outcome relationships suggested that the targeted process of social responsibility actually decreased rather than increased following participation in prevention program activities. As predicted, the process of social responsibility and the outcomes of expressed interest in and use of ATODs were linked, such that decreased social responsibility corresponded to increased ATOD interest and use. Including processes in the program model helped evaluators understand why the program was iatrogenic despite the best intentions of program developers and activity implementers. Interestingly, additional analyses found that a common (and the least expensive) activity (small student groups) was responsible for decreased social responsibility.

Of course, measuring biopsychosocial processes identified in program planning does not always provide a complete picture of "what's going on" inside participants served by the program. Understanding the social context in which the participant engages in program activities can add rich information to the program model being developed. The RAPOA model also can be improved by including not only the biopsychosocial processes targeted by program activities but also those social, psychological, and biological processes that could compete with, sabotage, or completely reverse the intended effects of the processes that were the focus of program activities. Knowing, for instance, that females in a literacy program had been threatened with physical harm by extremists could explain why threatened women were not paying enough attention (an internal, cognitive process) to literacy activities for those activities to produce the outcome of improved reading comprehension. Careful assessment of program outcomes in controlled settings also could help evaluations discern whether the literacy activities failed to produce reading outcomes due to competing processes (anxiety) or whether they had succeeded but were not apparent on tests due to reluctance of the women to demonstrate acquired reading skills out of fear of reprisals by persons who might later have access to test scores.


Knowing which processes are supposed to be modified by program activities can be easy: The program may declare those processes when applying for funding and even announce them in appeals to potential participants. It is more difficult to learn which other processes may unexpectedly affect processes targeted by program activities or affect outcomes directly. Experience in the community in which the program operates and theories about program operations both can play important roles in highlighting problematic and facilitative processes. For example, a literacy expert in a capital city may not understand why literacy enhancement activities failed to produce the usual positive outcomes. Anyone “on the ground” in the program’s rural setting could tell the expert why the activities failed: because of competing processes, including severe anxiety! Another, more systematic way to identify biopsychosocial processes that may aid or deter desired outcomes is to use theory and research findings to create a comprehensive RAPOA model. Often, major forces outside the program may be controlling participant processes and outcomes more strongly than the activities in the program and in rather obvious ways that can escape evaluators who are focused primarily on program activities. For example, economic recession can reverse the effectiveness of programs designed to help psychologically distressed participants become employed.

PSYCHOLOGICAL AND OTHER METHODS OF MEASURING PROCESSES

If biopsychosocial processes are so important to include in program evaluation, how does one do that? If you have training in a social science, having a BA, MA, or PhD in sociology, education, anthropology, psychology, or human neuroscience, for example, you already know the answer. We have instruments for measuring most processes that support many of the activities → processes → outcomes links posited in most programs. Although qualitative measures are common in sociology and some areas of psychology, quantitative measures are more so: Both have their place in constructing a comprehensive model of a program on which people in most interest groups can agree. The better and more widely endorsed the model is, of course, the more likely it is that it will be used and maintained by program operators, funders, and participants. Many social scientists have devoted careers to creating reliable, valid, and inexpensive measures of processes that are easy to use. For many processes, questionnaires are often available that participants complete, which can be scored automatically or with simple arithmetic. Common, accessible catalogues of these are available online (e.g., the Buros Mental Measurements Yearbook, n.d.; https://buros.org/mental-measurements-yearbook). Curated databases of peer-reviewed research literature, such as PsycINFO and other offerings by the American Psychological Association, also can be searched quickly for research findings about a process and, often, the specific program activities that can move a process in a way that makes hoped-for outcomes more likely. Literature databases, including PsycINFO, can automatically send new findings about processes, activities, and outcomes of interest daily, weekly, or monthly. For some human services and areas of health research, consensus has created sets of measures that funders encourage all researchers to utilize. These common measures can reduce ambiguity about findings (e.g., PhenX Toolkit, n.d.).

METHODS FOR FINDING BIOLOGICAL, PSYCHOLOGICAL, AND SOCIAL PROCESSES THAT FOSTER PROGRAM OUTCOMES

A variety of research designs and statistical techniques has been developed to measure the existence, direction, and strength of relationships between each component of RAPOA models. Many texts and professional tomes, such as those by Kazdin (2003) and Shadish, Cook, and Campbell (2002), detail these designs, provide examples of their use in applied settings, and help you choose those that best fit a program and evaluation context. The choice need not be between either "rigorous but impossible to really do or afford" factorial designs that examine effects of each possible combination of activities on processes and then on outcomes of interest, or designs so "quasi" as to be case studies without comparison groups. An assortment of efficient designs has been developed over decades of emphasis on finding empirical bases for treatments and other interventions, as detailed by Collins, Murphy, and Strecher (2007), among others. Statistical techniques for analyzing the direction, significance, and strength of relationships between activities, processes, and outcomes include multiple regression, hierarchical linear modeling, and structural equation modeling. Modern, widely available statistical software such as R, SAS, and STATA makes it easy to conduct these and other analyses with even large datasets—almost too easy, given the importance of considering how data are collected and how to interpret software output (cf. Mitchell, 2016). Courses in statistics can help analysts avoid drawing incorrect conclusions from these powerful statistical applications. For instance, it is common to include in linear models of therapy outcomes not only measures of activities such as the "dose" of therapy (e.g., number of sessions of fixed duration) but also measures of processes such as specifics of the therapist–participant relationship (e.g., "alliance," or
participant self-efficacy). These also can add proxies for developmental processes such as age and for demographic moderators (e.g., gender, ethnicity) of activity → process → outcome relationships. For example, to understand which processes might foster or impede continuing to offer services in Clubhouses (a form of day program for participants with severe psychological problems; see McKay, Yates, & Johnsen, 2007), Gorman and colleagues (2018) developed a conceptual model of what does and does not sustain these programs. Gorman and colleagues interviewed Clubhouse directors and members. Using qualitative tools, they found that key informants posited Clubhouse longevity to be positively related to

1. renewable or steady funding,
2. diversity of funding,
3. having a resilient and forward-thinking Clubhouse director,
4. a supportive auspice agency or board of directors,
5. accreditation,
6. advocacy, and
7. current Clubhouse effectiveness.

Making explicit the conceptual model created by Gorman et al., Figure 9.1 shows how these determinants can be sorted into an RAPOA model. A key concept here is that the individual unit being studied in this context is not an individual person but an individual program, that is, individual Clubhouses. Including funding sources in the resources category makes sense: As explained earlier in this book, money is a means for acquiring resources of staff, space, participants, and so on. Renewable and diverse funding sources would be important for a Clubhouse program to continue to assemble resources for its activities. The activities listed in Figure 9.1 also seem a fairly obvious fit. Processes listed in Figure 9.1 were not "processes" in the sense of being a type of biopsychosocial event occurring internally for individual people. However, increased employment of Clubhouse members can be said to be both the result of Clubhouse activities and the means for producing the outcome of sustained Clubhouse operations. Again, a crucial idea for constructing this model of program sustainability is conceptualizing processes at the level of specificity at which Gorman and colleagues conducted their analyses, that is, at the level of the individual Clubhouse. This was the same level at which the other components of the RAPOA model already were placed. Additional processes occurring inside Clubhouses were posited as potentially related to Clubhouse demise or longevity: This work helped reconcile findings of quantitative tests of the model constructed by qualitative means.

FIGURE 9.1.  RAPOA model of sustainability of Clubhouse programs from qualitative analyses.

RESOURCES: (a) renewable or steady funding; (b) diversity of funding sources; (c) resilient and forward-thinking Clubhouse director; (d) supportive auspice agency or board of directors →
ACTIVITIES: (1) high fidelity of Clubhouse activities (as indicated by full accreditation of the Clubhouse); (2) advocacy for Clubhouse in the community →
PROCESSES: (A) increased employment of Clubhouse members in independent settings, i.e., Clubhouse effectiveness →
OUTCOMES: (i) sustained operations of the Clubhouse

Theory and prior research can guide selection of the resources, activities, processes, and outcomes to consider in the model. As an evaluation plan increases the number of people for whom activity, process, and outcome data can be collected, one also increases the number of variables that can be considered in the model. Statistical power analysis (e.g., Cohen, 1992) offers a quantitative means of deciding how many variables should be considered without sacrificing confidence that one will be able to detect activity → process → outcome relationships if those relationships actually exist. For most evaluation plans, statistical power can be calculated readily with statistical software such as STATA and G*Power, for which more information is listed at the end of the chapter.

Moving from the qualitative model-building process to quantitative testing, Gorman et al. (2018) conducted survival analyses that tested the statistical significance of quantitative measures of most components of the Clubhouse sustainability model shown in Figure 9.1. As is often the case, unambiguous quantitative measures were not available for all the determinants isolated by the interviews. Gorman and colleagues did find that Clubhouses lasted longer if they (1) had their own independent management, (2) had full fidelity (i.e., full accreditation) for implementing prescribed Clubhouse activities, (3) had two or more funding sources, and (4) did not have financial support from managed care. Several processes internal to Clubhouses were posited to explain the unexpected finding that having access to managed care did not help and actually reduced program sustainability. Gorman and colleagues hypothesized that some managed care systems may not have met the needs of some
Clubhouses. In particular, paperwork and approvals mandated by medical billing systems could have been so burdensome as to make them unaffordable to smaller Clubhouses. Also, billing codes used in some managed care systems may allow payment for only a subset of Clubhouse services. Ideally, these posited intra-program processes would be examined in a new cycle of testing before being adopted into the RAPOA model and a final evaluation report.
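The statistical power calculations mentioned earlier, of the kind G*Power or STATA perform, can be sketched with a normal approximation. This is an illustrative sketch, not the exact procedure any particular package uses; the effect size, alpha, and power values are conventional defaults, and exact t-based calculations give a slightly larger n.

```python
# Normal-approximation sketch of an a priori power analysis: the sample
# size per group needed to detect a standardized mean difference d
# between two independent groups with a two-sided test.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Medium effect (d = .5), alpha = .05, power = .80:
print(n_per_group(0.5))  # 63 (t-based software reports about 64)
```

The same logic, run in reverse, tells an evaluator how small an activity → process relationship a planned sample can reliably detect.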

COMPLETE THE MODEL: ASSESS EXISTENCE, DIRECTION, AND STRENGTH OF ACTIVITY → PROCESS AND PROCESS → OUTCOME RELATIONSHIPS

It is one thing to suggest, as we did earlier, that an RAPOA model be constructed on the basis of theory. It is an entirely different effort to actually construct such a model. Many student researchers confront this problem when designing their first study, because they often are not told what to investigate by their advisor. What ensues can be a variation of the following, more structured effort: construct activity × process and process × outcome matrixes for the program being evaluated.

Evaluate Activity → Process Relationships

Why the program is doing what it is doing is particularly important to an improvement-oriented evaluation. Discovering the reasons for implementation of different activities in a program is an even more time-consuming and threatening undertaking than only measuring program outcomes and program costs (major challenges by themselves!). For example, improving social responsibility, enhancing communication with parents, and better feelings about school were all processes targeted for adolescent participants in the substance abuse prevention program evaluated by Kissel-VanVoolen (cited in Yates, 2002). These processes head the five rightmost columns of Table 9.1. The program activities designed to increase social responsibility are listed in the leftmost column. These activities included student small groups, field trips, individual meetings, camping trips, home visits, and parent group meetings. The activity → process relationships that the program designers hoped would occur are shown in the remaining columns of Table 9.1. All activity → process relationships were predicted to be positive, as shown by the "+" on the left sides of the diagonal slashes in each cell. The actual relationships found by the 3-year evaluation conducted at several schools are shown on the right sides of the diagonal slashes in each
cell in Table 9.1. Statistical analyses found that many of the activity → process relationships were not significantly different from a null ("∅") relationship (a correlation of .00). One activity → process relationship was significantly negative ("–"), contrary to expectations. Unfortunately, no activity → process relationships were found to be significantly positive (which would have been indicated by "+" to the right of the diagonal slashes in the cells of Table 9.1).

A comprehensive evaluation that gathers enough information to help programs be the best they can be opens up another "black box" in program evaluation: the program participant. This "box" includes the psychological processes that operate within individual people, as well as the social processes that operate between individuals in families, couples, teams, squads, work groups, and communities. Even the biological processes that occur within and between individuals are "fair game" for a thorough program evaluation; hence our use of the adjective biopsychosocial when referring to processes involving the participant.

TABLE 9.1. Qualitative Designation and Direction of Activity → Process Relationships in RAPOA

Cell entries show the relationship predicted/found by the qualitative evaluation: "+" direct relationship; "–" inverse relationship; "∅" no significant relationship. Columns are student processes.

| Program Activities | Social responsibility | Student communication with mother | Student communication with father | Student communication with both parents | Feelings about school |
|---|---|---|---|---|---|
| Student groups ($368 per student participant) | +/– | +/∅ | +/∅ | +/∅ | +/∅ |
| Field trips ($458 per student) | +/∅ | +/∅ | +/∅ | +/∅ | +/∅ |
| Individual meetings ($459 per student participant) | +/∅ | +/∅ | +/∅ | +/∅ | +/∅ |
| Camping trips ($1,332 per student participant) | +/∅ | +/∅ | +/∅ | +/∅ | +/∅ |
| Home visits ($1,630 per student) | +/∅ | +/∅ | +/∅ | +/∅ | +/∅ |
| Parent group meetings ($655 per student) | +/∅ | +/∅ | +/∅ | +/∅ | +/∅ |


  Evaluation for the Scientist–Manager–Practitioner

Evaluate Process → Outcome Relationships

Different participants may respond differently to the same program activities, with resulting differences in program outcomes. Learning what biopsychosocial processes are or are not occurring within the individual or group in response to program activities can be essential to understanding why programs work for some participants and not for others (e.g., for understanding differences in responsiveness to the same treatment). In the prevention program evaluated by Kissel-VanVoolen (cited in Yates, 2002), for instance, the psychosocial processes targeted by program activities were supposed to lead to the outcomes shown in Table 9.2.

TABLE 9.2. Process × Outcome Matrix for Substance Abuse Prevention Program for Adolescents

Cell entries show the relationship predicted/found by the qualitative evaluation: "+" direct relationship; "–" inverse relationship; "∅" no significant relationship.

| Student Processes | Willingness to use gateway ATODs | Willingness to use all ATODs | Actual use of ATODs |
|---|---|---|---|
| Social responsibility | –/– | –/– | –/– |
| Student communication with mother | –/∅ | –/∅ | –/∅ |
| Student communication with father | –/∅ | –/∅ | –/∅ |
| Student communication with both parents | –/∅ | –/∅ | –/∅ |
| Feelings about school | –/∅ | –/∅ | –/∅ |

Note. ATODs, alcohol, tobacco, and other drugs.

The process → outcome relationship predicted for changes in social responsibility was, in fact, supported by data collected during the evaluation. Specifically, the process of increased social responsibility was found to be inversely related to the outcomes of willingness to use drugs and actual use of drugs, as predicted by theory and prior research. The problem was that, as shown in the activity → process analyses described earlier, program activities actually changed the social responsibility process in the opposite direction from what was intended. No wonder the prevention program increased rather than decreased use of, as well as willingness to use, gateway drugs! Looking back at how much students participated in specific program activities, decreased social responsibility followed more student participation in small groups, especially for female fourth graders. This was both
the least expensive program activity, in terms of staff time, facilities, equipment, and materials, and the most iatrogenic. Perhaps an initial, small-scale test of specific program activities, followed by measurement of anticipated changes in psychosocial processes such as social responsibility, could have warned program managers that the planned activities would not lead to the desired outcomes. We recommend this sort of pilot testing of predicted activity → process relationships as a form of early formative evaluation, prior to the large-scale implementation of programs, even for program activities that "make sense" or seem, intuitively, bound to lead to desired outcomes. Whether they actually do is an empirical question that can be answered by evaluations of trial programs.

MIXED METHODS RAPOA THAT QUANTIFIES RESOURCE USE FOR CHANGES IN ACTIVITIES, IN PROCESSES, AND IN OUTCOMES

Working with the model developed by Lockwood-Dillard and Yates (detailed in Yates, 1999), assignment of specific costs to attainment of specific outcomes can be illustrated in a mixed methods RAPOA. Quantitative estimates generated in focus-group-style discussions among program staff were combined with the lists of resources, activities, processes, and outcomes shown in Table 9.3 to form successive resource × activity, activity × process, and process × outcome matrixes, using methods detailed earlier.

TABLE 9.3. RAPOA Logic Model Lists by Staff of a Residential Program for Substance-Abusing Participants

| Resources | Activities | Processes | Outcomes |
|---|---|---|---|
| Direct service staff | Group counseling | Self-efficacy expectancies | Complete abstinence from drugs |
| Administrative staff | Relapse prevention training | Relapse prevention skills | Stable employment |
| Facilities | Individual counseling | Support access skills | Avoidance of all criminal behavior |
| Utilities | Case management | Service access skills | Compliance with probation and parole |
| Support staff | | Bonding with addicts and ex-offenders | |
| Supplies | | Bonding with counselors | |
| Urine drug tests | | | |

Source: Yates (1999).


Although program staff and administration generated the lists of resources, activities, processes, and outcomes shown in Table 9.3, inclusion of participants, funders, and other interest group representatives in these discussions would have increased the validity of this logic model for cost-inclusive evaluation. Doing so also would have increased the time required for the RAPOA, however, and time was exceptionally constrained for this evaluation.

RESOURCE × ACTIVITY ANALYSIS MATRIX

Table 9.4 lists resources in rows and activities in columns for a typical participant for a month of program operation. In an earlier version of Table 9.4, "Procedures" was used in place of "Activities" in this evaluation, but later evaluations showed that participants in mental health services can be offended or reexperience trauma when "procedures" are mentioned in discussions. (Think "electroconvulsive shock" when you read "procedure," and these reactions become quite understandable.) Given this education, Yates pledged to use "Activities" in all future RAPOAs.

Total amounts of each major resource used in a week (e.g., $2,500 for direct service staff members) were estimated by staff members. They quickly determined that the use of several resources was linked, such that those resources would be used in about the same proportion by a given program activity. For example, according to staff, the resources Direct Service Staff, Administrative Staff, Facilities, and Utilities were used equally by two of the four program activities, Relapse Prevention Training and Individual Counseling; Case Management used somewhat more of them, and Group Counseling somewhat less. The resulting percentages are shown in the resource × activity matrix (Table 9.4). The percentages were applied to the total cost of the four linked resources ($3,400) to calculate the allocations of $612, $782, $782, and $1,224 for the four activities listed in the columns of the matrix. Other resources were specific to one or two activities. Support Staff, for example, contributed only to Relapse Prevention Training and Case Management, and to similar degrees, so 50% of the $500 total ($250) for Support Staff was allocated to each of those two activities.
Staff also decided that Supplies were used similarly by all activities, though somewhat more by Relapse Prevention Training and Case Management, so each of those two activities was allocated $150 of Supplies and the remaining two activities $100 each. Totals for each column, shown in the bottom row of Table 9.4, depict the total cost of each program activity. These values alone provided new insights for program staff, helping them see quantitatively, based on their own judgments, how much each major program activity cost in absolute and relative terms.
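The allocation arithmetic just described can be sketched in a few lines. The dollar amounts and percentages are those reported in Table 9.4; the code itself is our illustrative addition, not part of the original evaluation (the assignment of urine drug tests to Relapse Prevention Training is implied by the activity totals).

```python
# Resource -> activity allocations, using figures reported in Table 9.4.
# (Illustrative sketch; not code from the original evaluation.)

activities = ["Group Counseling", "Relapse Prevention Training",
              "Individual Counseling", "Case Management"]

# Four "linked" resources (direct service staff $2,500, administrative staff
# $250, facilities $500, utilities $150), split 18%/23%/23%/36% across activities.
linked_total = 2500 + 250 + 500 + 150                 # $3,400
allocations = {
    "Linked resources": [linked_total * p for p in (0.18, 0.23, 0.23, 0.36)],
    "Support staff":    [0, 250, 0, 250],             # 50%/50% to two activities
    "Supplies":         [100, 150, 100, 150],
    "Urine drug tests": [0, 1000, 0, 0],              # relapse prevention only
}

# Total cost of each activity = sum of its column across all resources.
activity_cost = {
    name: sum(resource[i] for resource in allocations.values())
    for i, name in enumerate(activities)
}
```

Summing each column reproduces the bottom row of Table 9.4: $712, $2,182, $882, and $1,624, for a program total of $5,400.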


TABLE 9.4. Resource × Activity Matrix

| Resources | Total | Group Counseling | Relapse Prevention Training | Individual Counseling | Case Management |
|---|---|---|---|---|---|
| Direct service staff ($2,500), Administrative staff ($250), Facilities ($500 rent), Utilities ($150) | $3,400 | $3,400 × 18% = $612 | $3,400 × 23% = $782 | $3,400 × 23% = $782 | $3,400 × 36% = $1,224 |
| Support staff | $500 | | $500 × 50% = $250 | | $500 × 50% = $250 |
| Supplies | $500 | $100 | $150 | $100 | $150 |
| Urine drug tests | $1,000 | | $1,000 | | |
| Total Activity Cost | $5,400 | $712 | $2,182 | $882 | $1,624 |

Source: Yates (1999).

ACTIVITY × PROCESS ANALYSIS MATRIX

To describe the existence and strength of the next set of causal linkages in the cost-inclusive program logic model—that is, between activities and processes—activities were moved to rows, and processes were listed in the columns of a new matrix (Table 9.5). The processes were taken directly from the variables list developed in initial conversations with staff members. Values for the cells of this matrix could have been obtained after time-consuming research administering psychological measures of each process to each participant who received program services. In the interest of time, and to minimize the costs of the cost-inclusive evaluation, staff members instead were more comfortable describing the fractions of total time and effort spent altering, in therapeutic directions, each of the six biopsychosocial processes listed in the matrix columns. This was simpler than might be anticipated: For most processes, staff members simply indicated whether, for a given program activity, the process was or was not targeted by that activity. The discussion proceeded row by row. For instance, beginning with the top activity row, staff members readily voiced that the activity Group Counseling addressed three of the six processes: Self-Efficacy Expectancies, Bonding with Addicts and Ex-Offenders (labels selected by staff), and Bonding with Counselors.

TABLE 9.5. Activity × Process Matrix

| Activities (costs carried over from Resource × Activity Matrix, Table 9.4) | Self-Efficacy Expectancies | Relapse Prevention Skills | Support Access Skills | Service Access Skills | Bonding with Addicts and Ex-Offenders | Bonding with Counselors |
|---|---|---|---|---|---|---|
| Group counseling ($712) | $712 × 33⅓% = $237 | | | | $712 × 33⅓% = $237 | $712 × 33⅓% = $237 |
| Relapse prevention training ($2,182) | $2,182 × 20% = $436 | $2,182 × 20% = $436 | $2,182 × 20% = $436 | | $2,182 × 20% = $436 | $2,182 × 20% = $436 |
| Individual counseling ($882) | $882 × 50% = $441 | | | | | $882 × 50% = $441 |
| Case management ($1,624) | | | | $1,624 × 75% = $1,218 | $1,624 × 12½% = $203 | $1,624 × 12½% = $203 |
| Total Cost of Processes ($5,400) | $1,114 | $436 | $436 | $1,218 | $876 | $1,317 |

Note. Figures rounded for ease of analysis, so some cross-verification totals may be off by a few dollars.

Source: Yates (1999).


Staff members also found it difficult to determine whether Group Counseling addressed any one of these three processes more than another, so the total cost of Group Counseling was distributed equally over the three processes; that is, the $712 cost of Group Counseling was divided by 3, resulting in approximately $237 spent on each of the three processes. As shown in Table 9.5, other combinations of processes were addressed to similar degrees by each program activity. Only one activity was deemed to focus more on one of its chosen processes than another: Staff members reported that about three-quarters of Case Management focused on acquiring skills for Service Access, while the remaining one-quarter was devoted to the two Bonding processes. So, as shown in the next-to-bottom row of Table 9.5, three-quarters of the Case Management cost of $1,624 (i.e., $1,218) was spent on Service Access skill development. The remaining one-quarter (i.e., $406) was split equally between the two types of bonding: Bonding with Addicts and Ex-Offenders, and Bonding with Counselors.

Total costs of program resources devoted to modifying each process were found by simply summing costs down each column. For instance, the program cost of Self-Efficacy Expectancies was the sum of the costs of Group Counseling, Relapse Prevention Training, and Individual Counseling devoted to Self-Efficacy Expectancies, that is, $237 + $436 + $441 = $1,114. This cost and the costs of modifying the remaining five biopsychosocial processes are listed in the bottom row of the activity × process matrix. Seeing and comparing these costs provided additional insights to the program staff, administrator, and evaluator about the relative amount of resources spent on addressing each process. It was somewhat surprising to find that the most resources were devoted to the process Bonding with Counselors, although this seems congruent with the idea of enhancing a therapeutic alliance.
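The column summing just described can be sketched as follows. The code is our illustrative addition; the allocation fractions come from the staff judgments reported above and in Table 9.5 (the equal 20% spread for relapse prevention training across five processes is inferred from that table's cell entries).

```python
# Activity -> process step: spread each activity's cost over the processes
# staff said it addressed, then sum down each process column.
# (Illustrative sketch; fractions are the staff judgments from Table 9.5.)

activity_cost = {
    "Group counseling": 712,
    "Relapse prevention training": 2182,
    "Individual counseling": 882,
    "Case management": 1624,
}

focus = {  # fraction of each activity devoted to each process
    "Group counseling": {"Self-efficacy expectancies": 1 / 3,
                         "Bonding with addicts and ex-offenders": 1 / 3,
                         "Bonding with counselors": 1 / 3},
    "Relapse prevention training": {"Self-efficacy expectancies": 0.20,
                                    "Relapse prevention skills": 0.20,
                                    "Support access skills": 0.20,
                                    "Bonding with addicts and ex-offenders": 0.20,
                                    "Bonding with counselors": 0.20},
    "Individual counseling": {"Self-efficacy expectancies": 0.50,
                              "Bonding with counselors": 0.50},
    "Case management": {"Service access skills": 0.75,
                        "Bonding with addicts and ex-offenders": 0.125,
                        "Bonding with counselors": 0.125},
}

process_cost = {}
for activity, fractions in focus.items():
    for process, fraction in fractions.items():
        process_cost[process] = (process_cost.get(process, 0.0)
                                 + activity_cost[activity] * fraction)
```

Within a dollar or two of rounding, the resulting process costs match the bottom row of Table 9.5 ($1,114, $436, $436, $1,218, $876, $1,317) and still sum to the $5,400 program total.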

PROCESS × OUTCOME ANALYSIS MATRIX

For the final matrix in the RAPOA series (Table 9.6), processes were moved to rows, and outcomes from the program logic model were listed in columns. Much traditional research in psychological and related disciplines devotes considerable time and funds to precisely measuring the relationships between changes in biopsychosocial processes in participants and changes in observable outcomes. Because the processes posited by staff were unusual, as were the outcomes, findings could not be summoned easily from the literature to complete the cells of the process × outcome matrix. Instead, staff members were asked to estimate how much each process affected each outcome. Again, these discussions went row by row, and again the discussion began not with requests for specific amounts of time but with


TABLE 9.6. Process × Outcome Matrix

| Processes (costs carried over from Activity × Process Matrix, Table 9.5) | Complete Abstinence from Drugs | Stable Employment | Avoidance of All Criminal Behavior | Compliance with Probation and Parole |
|---|---|---|---|---|
| Self-efficacy expectancies ($1,114) | $1,114 × 40% = $446 | $1,114 × 20% = $223 | $1,114 × 40% = $446 | |
| Relapse prevention skills ($436) | $436 × 100% = $436 | | | |
| Support access skills ($436) | $436 × 25% = $109 | $436 × 25% = $109 | $436 × 25% = $109 | $436 × 25% = $109 |
| Service access skills ($1,218) | | $1,218 × 80% = $974 | | $1,218 × 20% = $244 |
| Bonding with addicts and ex-offenders ($876) | $886 × 32% = $284 | $886 × 4% = $35 | $886 × 32% = $284 | $886 × 32% = $284 |
| Bonding with counselors ($1,317) | $1,317 × 10% = $132 | $1,317 × 40% = $527 | $1,317 × 10% = $132 | $1,317 × 40% = $527 |
| Total Cost of Outcomes ($5,400) | $1,407 | $1,868 | $971 | $1,164 |

Note. Figures rounded for ease of analysis, so some cross-verification totals may be off by a few dollars.

Source: Yates (1999).

whether a given process (e.g., Self-Efficacy Expectancies) determined one, some, or all of the four outcomes. Staff decided that Self-Efficacy Expectancies was a key determinant of two outcomes: being drug free (Complete Abstinence from Drugs) and being crime free (Avoidance of All Criminal Behavior). That said, they also noted that the same process contributed to Stable Employment. Staff settled on the percentages shown in the row labeled "Self-efficacy expectancies." The total cost of Self-Efficacy Expectancies was carried over from the preceding activity × process matrix to this process × outcome matrix. That total of $1,114 was then distributed over outcomes using the percentages generated by staff discussion (e.g., $1,114 × 40% = $446, rounded). Similar discussions produced the percentages shown in most cells of the process × outcome matrix and, by multiplying each percentage by the total program cost devoted to modifying a given process, the corresponding dollar amounts; for example, 40% of the $1,317 of program resources devoted to Bonding with Counselors, that is, $527 rounded, was allocated to the outcome Compliance with Probation and Parole.


Process costs were summed down each outcome column to find the total cost of achieving each outcome, for example, $1,407 devoted to being drug free. This is not the same as saying that $1,407 is the cost of actually achieving freedom from drug use, but it does give program staff, administrators, participants, and evaluators a better idea of how much of the different program resources is focused on (hopefully) achieving each program outcome. Research measuring the degree to which targeted outcomes were achieved, and the degree to which different processes were modified by the different activities, could then yield the cost per drug-free day, week, month, or year.
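The final allocation step can be sketched the same way; the code is ours, with process costs and percentages taken from Tables 9.5 and 9.6 (outcome labels are shortened here). Because the printed figures are rounded, totals computed this way land within a few dollars of the printed $1,407, $1,868, $971, and $1,164.

```python
# Process -> outcome step: distribute each process's cost over outcomes
# using the staff percentages in Table 9.6, then sum down each column.
# (Illustrative sketch; shortened outcome labels are ours.)

process_cost = {
    "Self-efficacy expectancies": 1114,
    "Relapse prevention skills": 436,
    "Support access skills": 436,
    "Service access skills": 1218,
    "Bonding with addicts and ex-offenders": 876,
    "Bonding with counselors": 1317,
}

share = {  # fraction of each process cost attributed to each outcome
    "Self-efficacy expectancies": {"abstinence": 0.40, "employment": 0.20,
                                   "no crime": 0.40},
    "Relapse prevention skills": {"abstinence": 1.00},
    "Support access skills": {"abstinence": 0.25, "employment": 0.25,
                              "no crime": 0.25, "probation/parole": 0.25},
    "Service access skills": {"employment": 0.80, "probation/parole": 0.20},
    "Bonding with addicts and ex-offenders": {"abstinence": 0.32, "employment": 0.04,
                                              "no crime": 0.32, "probation/parole": 0.32},
    "Bonding with counselors": {"abstinence": 0.10, "employment": 0.40,
                                "no crime": 0.10, "probation/parole": 0.40},
}

outcome_cost = {}
for process, outcomes in share.items():
    for outcome, fraction in outcomes.items():
        outcome_cost[outcome] = (outcome_cost.get(outcome, 0.0)
                                 + process_cost[process] * fraction)
```

Because each process's fractions sum to 1.0, the outcome costs together still account for every program dollar; nothing is lost in the chain from resources through activities and processes to outcomes.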

FORMATIVE FINDINGS OF COST-INCLUSIVE MIXED METHODS EVALUATION

It is common to assume that cost-inclusive evaluation is a largely summative, judgmental form of evaluation that will result in a decisive "bottom line." However, examples from accounting analyses in the preceding chapters, as well as the preceding mixed methods evaluations, illustrate how cost-inclusive evaluation can be particularly formative. The insights possible from conducting cost-inclusive evaluation also can go beyond comparisons of input "costs" and output "outcomes." RAPOA can help evaluators achieve a clear, explicit understanding of why a program engages in the activities it chooses to modify the processes it thinks will affect desired outcomes. Hopefully, the costs found in the preceding example for different activities, processes, and outcomes illustrate how resource amounts and values can permeate each element in a logic model of a program, not just its end product. This occurs, at least in part, because resources "flow" through the entire program, from activities through outcomes. Resource effects do not, one hopes, stop when an activity is performed: Resources consumed by a program should make possible not only outcomes but also the activities and changes in processes that prompt those outcomes.

COMPLEXITIES AND INDIVIDUAL VARIABILITY IN INDICES OF COST-EFFECTIVENESS

As explained in earlier chapters, traditional cost-inclusive evaluation has developed a variety of indices for use by decision makers. Among these are ratios of cost to effectiveness (and effectiveness to cost), ratios of benefits to costs, differences between benefits and costs (net benefit), and return on investment, including the time until the return on an investment exceeds the amount invested, after adjustment for present value.


Indices such as these seem to make complex findings simple, sometimes simpler than they actually are. Unfortunately, relationships between costs and outcomes usually are more complex than cost indices suggest. For instance, suppose a funder reads an evaluation report noting that Program A has a lower average cost per unit of outcome than Program B—a mean $400 per drug-free day after treatment in Program A, for instance, versus a mean $500 per drug-free day after treatment in Program B. The difference in the cost-effectiveness of the two programs seems clear and simple, and recommends Program A. But what if costs in both programs vary considerably around their means for drug-free days, in terms of different participants' responsiveness to treatment or in effectiveness over time—so much so that statistical analysis of the cost-effectiveness ratios finds no significant difference in these indices for the two programs? Furthermore, what if one implementation of Program A, in a rural setting for example, costs an average $250 more per drug-free day than another implementation of Program A in an urban setting? Rarely do programs differ consistently from other programs on all measures.

Cost-inclusive indicators such as "cost per outcome" combine the inevitable uncertainty surrounding costs for different participants with the frequent and considerable uncertainty of program outcomes for different participants. Evaluating programs in units that include money, such as dollars, can engender often-unwarranted confidence in the accuracy of an evaluation in the eyes of too many beholders. Yates (the second author) has been surprised, for example, to see seasoned researchers celebrate finding average costs per participant slightly lower for a favored program than for another program, even though their own statistical analyses of costs for different participants showed the differences not to be significant!
These researchers, who would never consider programs with nonsignificantly different outcome indicators to be "different," have remarked on how major the cost difference of $5 per participant would become as 1,000 participants enrolled in the marginally less expensive program! (Of course, given the uncertainties of measurement of costs for different participants, the $5 difference could have been $0, or even $5 in favor of the other program, upon resampling and as additional participants enrolled.)

To decide with confidence that "cost per outcome" and other indices truly are different between two or more programs, decision makers need to understand and use statistics such as confidence intervals or statistical tests. The latter, of which the t-test and analyses of variance are examples, take into account the variability introduced by differences between participants, between providers, and between programs. These statistics, readily calculated by most spreadsheet apps as well as by the statistics software listed at the end of the chapter, render a difference "significant" only if the difference between, say, programs is considerably greater than the difference caused by participant characteristics or by changes in program operations over time or through different seasons.

Even the costs of programs are not readily compared as single numbers. For human services in particular, costs might seem to be the most readily discernible difference, whereas program activities might seem obtuse and intrusive to assess. Costs "according to whom," however, can be crucial to consider. As noted earlier, funders, providers, and participants can have radically different perspectives on costs.

Adding variability in outcomes to variability in costs can increase the variability of the resulting cost-outcome indices. In many programs, outcomes often are both numerous and different when measured for different individual participants. Error in measurement of outcomes is partly to blame, but a major source of variability in outcomes is their determination by complex interactions between program activities, participants' biopsychosocial processes, and participant characteristics such as age, gender, ethnicity, and prior service experience. Also, human service outcomes often become evident only months, years, or decades following program participation (e.g., Head Start), leading to uncertainty as to which activities caused the processes that produced those outcomes. Consider, for example, domestic abuse prevention programs targeted at individuals who, as children, witnessed abuse of one parent by another. Outcomes of such programs may be determined by a host of factors, of which participation in the abuse prevention program is but one.

The usefulness of cost-effectiveness and cost-benefit indices is that most people seeing the indices gain confidence in the wisdom of decisions they make based on those indices. That prompts them to use, and to advocate for future use of, cost-inclusive evaluation. We are in favor of that!
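The kind of statistical check described above can be sketched in a few lines of Python. The per-participant costs below are invented to make the point: the means differ by only $5, but within-program variability dwarfs that difference, so Welch's t statistic falls far short of the conventional threshold of roughly ±2.

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical per-participant costs (illustrative data, not from the text).
program_a = [390, 400, 410]          # mean $400
program_b = [385, 405, 425]          # mean $405

t = welch_t(program_a, program_b)    # small |t|: difference is not significant
```

A real analysis would of course use larger samples and compare the statistic against the t distribution with the appropriate Welch degrees of freedom; the sketch shows only the variance-aware comparison the text calls for.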
It is crucial, however, that the cost-effectiveness and cost-benefit indices be correct. So, we recommend that indices of cost → outcome relationships be used and reported with clear caveats about which interpretations of those indices are and are not justified. The following sections describe interpretations that are, and are not, supported by indices common in cost-inclusive evaluation.

COST-EFFECTIVENESS AND COST/EFFECTIVENESS (AND EFFECTIVENESS/COST) RATIOS

So if two programs have two different cost/effectiveness ratios, pick the better one, right? Hmmm . . . which one is really better? These concerns go beyond variability and the appearance of a difference that is not justified by statistical tests.


Just having an apparently smaller ratio of costs to effectiveness, or a bigger ratio of effectiveness to cost, does not always mean we should automatically fund or advocate for that program. Viewed critically, cost/effectiveness ratios (CERs, such as "cost per pound lost") and effectiveness/cost ratios (ECRs, such as "21 high school dropouts prevented per $10,000 invested in tutoring programs") discard two separate pieces of information (cost, effectiveness). In return, those ratios give us something apparently new that includes just enough information to make a decision with some confidence—and perhaps too little information to make a fully informed, wise decision. For example, if Program Y's CER is $1,000 per pound lost and kept off for at least a year, and Program Z's is $100 per pound lost for the same year, Program Z would seem better—unless one or more of the following conditions exist.

• Program Y and Program Z have sufficient variability between participants or over time in either effectiveness, costs, or both that the average CERs or ECRs are not really different—at least not according to standard statistical tests.

• Program Z is so much more costly than Program Y that Z is unaffordable.

• Program Z is so ineffective that it does not meet minimum standards for effectiveness and simply does not provide outcomes that justify any outlay of resources (e.g., it only helps participants lose 1 pound, even though it costs just $100 for the entire program).

• Participants potentially addressed by Programs Y and Z are quite different; for example, Program Y could be designed for severely obese persons with hormonal disorders, whereas Program Z could be designed for persons who have perhaps 20 extra pounds on their frames and who just have not exercised routinely in the past few years.

• Program Y uses medical professionals, including physicians' assistants, nurses, and a consulting physician, whereas Program Z uses volunteers who have been through the program themselves.

• Program Y uses space leased for a medical practice, but Program Z uses space rented by the hour in a church basement.

It also is rare for programs to differ in just one or two ways that make choices easy. Program Z may be consistently less expensive than Program Y, for example, but Program Z also may generate consistently poorer outcomes than Program Y—when Program Y is small (say, working with 50 or fewer participants per month). When Programs Y and Z are "scaled up"
to work with over 100 participants per month, however, the cost difference between them may disappear. Differences in outcomes could then disappear as well, or could even reverse. Some of the above conditional statements also could cause ratios of cost versus effectiveness to vary considerably from community to community. Some communities might be able to supply more volunteers, for example, and some communities could be more likely to have free space available in churches, temples, or community centers.

Yates (2020) noted that some of these problems in interpreting and using ratios of cost and effectiveness can be reduced or eliminated by simply graphing cost versus effectiveness. Graphs of program cost (usually on the horizontal axis) versus program outcomes (usually on the vertical axis, because outcomes can be thought of as a function of resources—of costs) preserve information about the absolute values of costs and effectiveness for different programs, as shown in Figure 9.2. Information about the costs and outcomes of many programs can be included in a single graph. The relative costs of each program also can be compared visually, as can the relative outcomes.

Uncertainty in estimates of program costs and outcomes, or variability in costs and in outcomes, can be expressed in cost-outcome graphs as well, by extending the common use of vertical "I-bars" expressing variability on the vertical axis to horizontal "I-bars" expressing variability on the horizontal axis, as shown for a variety of programs in Figure 9.2. This combination of variance in both costs and outcomes results, on a graph, in an area of uncertainty for each program's cost-outcome relationship, outside of which other programs' cost-outcome regions can be deemed "different." In Figure 9.2, for example, Programs B and C overlap on costs and outcomes, but both seem to differ substantially from both Programs A and D.
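The "region of uncertainty" idea can be made concrete: treat each program as a rectangle spanning its cost interval and its outcome interval, and deem two programs distinguishable only if their rectangles do not overlap. All programs and interval widths below are invented, loosely echoing the pattern of Figure 9.2.

```python
# Overlap test for cost-outcome uncertainty regions (illustrative sketch;
# the programs and confidence-interval widths are hypothetical).

def interval(center, half_width):
    return (center - half_width, center + half_width)

def regions_overlap(r1, r2):
    """True if two (cost_interval, outcome_interval) regions intersect."""
    (cost1, out1), (cost2, out2) = r1, r2
    costs_meet = cost1[0] <= cost2[1] and cost2[0] <= cost1[1]
    outcomes_meet = out1[0] <= out2[1] and out2[0] <= out1[1]
    return costs_meet and outcomes_meet

programs = {                                   # (cost ± CI, outcome ± CI)
    "A": (interval(300, 50), interval(20, 5)),
    "B": (interval(500, 80), interval(35, 6)),
    "C": (interval(520, 90), interval(33, 7)),
    "D": (interval(800, 60), interval(50, 4)),
}
```

With these invented values, Programs B and C overlap (and so cannot be called different), while A sits clear of B and D sits clear of C, mirroring the verbal reading of Figure 9.2.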
Several measures of variability can be used to form these cost-outcome regions. Standard deviation is a common metric of variability with which most human service providers and researchers are familiar. Confidence interval is another statistic commonly used by social scientists. Both provide useful measures of variability. And if costs and outcomes are being estimated rather than measured, as in Yates (1978, 1980c), "low" and "high" estimates can define the limits of uncertainty, with "best guesstimates" providing the centers of cost-outcome regions for a given program.

Finally, if information about the costs and outcomes of several programs is available, along with findings on the variability of those costs and outcomes for different participants, different program sites, or different time periods, a clear and literal picture of the relationship between cost and effectiveness may emerge. As shown by the dashed line in Figure 9.2, for instance, a relationship may be found in which more and more has to be spent to get the same increment in effectiveness. It is possible, of course, for outcomes to be related to costs in other ways. After a certain investment of program resources, for example, entirely new levels of program effectiveness may be realized: a major "step up" in outcomes after crossing a threshold in resources invested in the program, that is, in program costs.

FIGURE 9.2. Graph of program outcome as a possible function of program cost, with uncertainty regions. The graph shows ranges of participant responsiveness to programs and measurement uncertainty (vertical and horizontal confidence interval bars for outcome and cost, respectively, for Program A; shaded areas represent confidence areas for Programs A through E).

COST-BENEFIT AND BENEFIT/COST RATIOS

Ratios of totaled program benefits to totaled program costs can be especially seductive indicators. As noted earlier, benefits and costs are measured in the same, typically monetary, units. Many novice decision makers assume that if a program's benefits exceed its costs, it should be funded. After all, that's a good investment, right? Even if the costs are societal resources such as tax dollars and the benefits are savings in tax dollars, who wouldn't want to save some money? Well, it's rarely as simple as that. Often, most or all programs can show that they could have greater benefits than costs. This is especially likely if future savings in tax dollars are assumed for a long enough period, even if the savings are adjusted downward for the lesser value of more delayed savings. What one typically finds is that there are not enough resources available at one time to fund all programs that promise to return more to the societal "tax till" than has been "borrowed" from that till. If resources for programs are limited in the period preceding the time at which those programs will deliver benefits that exceed costs with certainty (and when are they not?), one could simply fund those programs with the best ratios of benefits to costs. That, however, runs into all the problems identified earlier in the chapter for cost-effectiveness indices. As with indices of costs and effectiveness, graphing costs and benefits can resolve some of the problems created by just using ratios of benefits to costs. A graph of costs versus benefits also can include a simple indicator of when a program's benefits exceed its costs: a dashed line on the graph showing where benefits equal costs, as shown in Figure 9.3. Using areas surrounding the average benefit and cost to show variability for different programs' benefits and costs, decision makers can simply find those programs whose

FIGURE 9.3 Graph of program benefits and program costs with uncertainty regions (shaded areas). Benefits and costs are measured in the same units (e.g., thousands of dollars). The straight dashed line represents a net benefit of zero (i.e., benefit = cost). The curved dashed line fits different programs to reveal a possible benefit = f(cost) relationship between programs.


uncertainty region is clearly above the benefits = costs line on the graph (Programs A, C, and B in Figure 9.3). Such graphs can inform interest groups with varying levels of knowledge of statistics about differences in cost-effectiveness and cost-benefit for alternative programs. Moreover, trade-offs of greater expense for better outcomes also can be discussed by different interest group representatives in the context of these graphs.
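The "clearly above the line" judgment for Figure 9.3 can be approximated numerically when only low/best/high estimates are available: a program's uncertainty region lies wholly above the benefit = cost line if even its lowest benefit estimate exceeds its highest cost estimate. A minimal Python sketch with invented figures (the program labels and amounts below are hypothetical, not taken from Figure 9.3):

```python
# Hypothetical (low, best, high) estimates, in thousands of dollars.
programs = {
    "A": {"cost": (40, 50, 60), "benefit": (90, 110, 130)},
    "B": {"cost": (80, 100, 120), "benefit": (150, 200, 250)},
    "E": {"cost": (60, 70, 80), "benefit": (50, 75, 100)},
}

clearly_positive = {}
for name, est in programs.items():
    cost_lo, cost_best, cost_hi = est["cost"]
    ben_lo, ben_best, ben_hi = est["benefit"]
    # Conservative test: worst-case benefit still beats worst-case cost.
    clearly_positive[name] = ben_lo > cost_hi
    print(f"{name}: benefit/cost = {ben_best / cost_best:.1f}, "
          f"uncertainty region above the line: {clearly_positive[name]}")
```

Note that Program E has a best-guess benefit above its best-guess cost, yet its uncertainty region straddles the line, so it would not be flagged as clearly positive.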

CONCLUSION: REALLY DOING COST-INCLUSIVE EVALUATION!

As we noted at the beginning of this book, evaluation that includes costs of programs, monetary outcomes of programs, or both is evaluation that can make a real difference in the lives of those participating in the program and in the communities in which the program operates. Cost-inclusive evaluation speaks in a common language of power—money, as well as results—and therefore is used more often than most forms of evaluation. With that additional power comes additional responsibility. Measurement of the value of resources used by a program and the value of outcomes resulting from a program can be misused more than other forms of evaluation. More interest groups believe they immediately and completely understand findings of cost-inclusive evaluation because those findings are expressed in monetary terms—for example, in money spent, saved, generated, or returned. Evaluation findings that are numeric, particularly if they are preceded by a monetary unit such as a dollar or euro sign, gain a degree of believability that can be, in our experience, undeserved. There is a unique danger in evaluations that report findings in universally recognizable units of measurement rather than the more esoteric ones common in other evaluations. In actuality, all evaluations have multiple findings at different levels of specificity, requiring more than quick, casual attention to the questions asked, the persons and organizations asking those questions, the persons and organizations that listened and answered, and what those answers mean. As we have attempted to explain in the past several chapters, it is essential to appreciate the nuances and limitations of analyses of cost, cost-effectiveness, and cost-benefit. Evaluation of any type is not just a set of exercises that can be performed, with findings that can be summarized quickly and evaluation sequelae that can be dismissed as beyond the purview of the evaluator.
Moreover, if the measures of costs and outcomes that went into a cost-­ inclusive evaluation are “garbage,” surely the conclusions coming from that


evaluation will be garbage or worse. The more that estimates are substituted for actual observations, and the more those estimates originate in the mind of the evaluator rather than the experience of program participants and providers, the greater the chance that the evaluator's hypotheses and biases may contaminate the evaluation. If measures of costs or outcomes are biased against certain interest groups, then findings of evaluations using those measures are likely to be biased against those interest groups, too. Monetary units have the potential to multiply measurement and evaluation biases manyfold, again in our experience. Given the special power that cost-inclusive evaluation can have, it is even more essential that measures of costs and outcomes be chosen carefully and by consensus of interest groups, including those served by a program. "Nothing about me without me" is a refrain of participants that should be heard by all evaluators, including cost-inclusive ones. Participants, not just program providers and funders, should be key members of cost-inclusive evaluation teams from beginning to end. Ethics of evaluation, as reflected in statements of guiding principles and values such as those of evaluation associations (e.g., American Evaluation Association, 2018), should guide every cost-inclusive evaluation. Here we add our own comments on several prescriptive phrases common in program evaluation. We have found these helpful in our cost-inclusive evaluation work; we hope you will, too, as you gain more experience conducting evaluations that, hopefully, do include costs and monetary outcomes.

• Above all, do no harm. If an evaluation risks hurting the people served by a program, that evaluation should change or stop. Something is seriously wrong with the evaluation, program, context, or all of these.

• Evaluation should never aid exploitation of any people, for any reason. People being exploited may not always report it, but they know it and may need to be encouraged to report it before, during, and after the evaluation. Some approaches to and methods of evaluation also can exploit, however inadvertently, as have approaches to program design and implementation (Gone, 2021).

• Be aware of hypotheses that have become "heartpotheses." Some cultures of science and some approaches to evaluation state that hypotheses are completely objective, entirely unbiased questions for evaluation. Most hypotheses actually are, at least in part, grounded in beliefs and desires held closely by evaluators or other interest groups involved in an evaluation. At the very least, evaluators need to become aware of the heartpotheses they hold and the accompanying biases.


• Evaluators are an interest group. Recognizing one's own biases and other limitations can be a step toward a less biased, less limited evaluation. "I'm an evaluator and I'm here to help," for example, is a mindset that seems positive and benevolent but actually generates mistrust in many program providers and participants. (Yes, one of us was quite surprised when expressing this sentiment and getting mistrust in return!) Acknowledging how one's own interests are served by an evaluation provides a more transparent foundation for trust.

• Give respect to get respect. Many program participants, and not a few program providers and funders, have experienced harm when seeking help in the past. They may feel disrespected by programs and funders that sponsor cost-inclusive evaluations. Repeatedly demonstrating profound respect for participants, despite their frank expressions of concern and mistrust, eventually can lead to mutual respect and a far more productive evaluation.

• Being truly humble is difficult but essential. You have worked very hard to be where you are now; you probably are working even harder to get where you are going. Feeling pride in one's accomplishments and potential is natural. Feeling that one's degrees and position give one privilege or indicate that one's understanding and ideas are superior to others' also is natural, but almost always damaging. Try to question yourself even more than you question others. Self-evaluation is not only possible but necessary throughout a program evaluation. To not self-evaluate, continually, can be seen as hypocritical for an evaluator.

• Prepare for adventure. In our experience, every evaluation is full of surprises; cost-inclusive evaluation even more so. Be ready to receive them openly, positively! With many unexpected twists and turns, "buckle up" is solid advice at the start of any evaluation. Productive conflict can be anticipated; it is important not to take the conflict personally, even when others may try to take it to a personal level. At the other extreme, productive friendships are possible and desirable in evaluations, but they can lead to biased evaluations if they become more intimate. Being a professional evaluator is not about being cold or distant, nor too warm or friendly; it is about keeping a certain distance and maintaining certain boundaries with an open mind and heart.

• Be ready for discovery! Program evaluations seldom fail to surprise, often in good ways. In our work, profound insights have pushed themselves into our consciousness sometimes gently, sometimes suddenly and assertively. When one asks about the types and amounts and monetary values of resources used by programs and of resources saved or generated by programs, insights can come fast and furious, and may be entirely different from what you anticipated. This is one of the very best reasons to make your evaluation cost-inclusive.

For example, following the hypotheses funded for a multisite evaluation that included costs and benefits, Yates began his cost and cost-effectiveness analysis expecting that one model of providing participant-operated services (typically referred to as consumer-operated mental health services) would be found more effective, and perhaps less costly, than the other two models. Instead, it was the way in which those services were delivered—the delivery system—that determined effectiveness somewhat and costs very much (Yates, 2010; Yates et al., 2011). The three models of service were largely independent of the delivery systems; costs and effectiveness varied considerably between sites using the same basic model for program design and implementation. That finding was difficult for participants and researchers involved in the evaluation to accept. Nevertheless, it provided an insight that continues to guide Yates's writing and evaluation to this day. We hope you have many such insights as you pursue cost-inclusive evaluation.

Finally, if you are new to cost-inclusive evaluation, you will likely not understand everything that you read in this book during your first perusal of the material. And you may even feel a little uneasy by the time you conclude the final three sections of this chapter, which emphasized that too-simplistic interpretations may be inaccurate. Don't despair. Keep in mind that learning something new takes time and effort. We all had to crawl before we learned to walk. We fell many times along the way. Similarly, you will likely make some mistakes along the way. But you will also learn from those mistakes and get better and better at performing cost-inclusive evaluations. And eventually you will be able to move from simple types of cost analyses to more sophisticated types of cost analyses.

SUMMARY

Cost-inclusive evaluation need not restrict itself to modeling and collecting qualitative and quantitative data only on "inputs" (resource types and amounts, valued as costs) and "outputs" (outcomes, including effectiveness and benefits). By including in a cost-inclusive evaluation logic model the activities of program providers and participants and the processes posited to occur within participants, the evaluator creates a more comprehensive model and gains a more complete understanding of the program. Measuring the internal—the biopsychosocial—processes that program activities change in participants is relatively easy, as shown in examples provided and via references cited. Many measures have been developed
for processes for individual people, couples, families, and even communities and companies. More process-assessment instruments can be created or adapted from existing measures. Statistical considerations, such as statistical power, and analyses for testing the existence, strength, and direction of relationships between parts of the program logic model are detailed with additional examples. These include resource → activity, activity → process, and process → outcome relationships. A variety of tables for summarizing these relationships qualitatively and quantitatively are described in this mixed methods approach to cost-inclusive evaluation. A comprehensive example of this RAPOA is provided for a program helping formerly substance-abusing individuals transition from treatment to maintenance and self-management. This example used estimates of each of the above relationships, generated during several hours of focus group discussion with program staff. The result of simple arithmetic operations was isolation of the cost of each program activity and change in each participant process, as well as the cost of achieving each outcome. Finally, a variety of graphic approaches are used to show how cost-effectiveness and cost-benefit relationships can be analyzed with only descriptive statistics, displaying the variability of participant response (outcomes) to program activities. By "keeping it simple," these graphs allow comparison of different programs by a broad range of interest groups. Discussion of whether a more costly program is "worth it" in terms of superior outcomes can be facilitated by use of these graphs.
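The "simple arithmetic operations" used to isolate the cost of each program activity can be illustrated: multiply each resource's total cost by the staff-estimated proportion of that resource consumed by each activity, then sum within each activity. All resource names, amounts, and proportions below are hypothetical, invented for illustration rather than taken from the RAPOA example.

```python
# Hypothetical annual resource costs and staff-estimated shares of each
# resource devoted to each activity (each resource's shares sum to 1.0).
resources = {"staff time": 120_000, "space": 30_000, "materials": 10_000}
proportions = {  # resource → {activity: share of that resource}
    "staff time": {"counseling": 0.6, "outreach": 0.25, "admin": 0.15},
    "space":      {"counseling": 0.5, "outreach": 0.2,  "admin": 0.3},
    "materials":  {"counseling": 0.7, "outreach": 0.3,  "admin": 0.0},
}

# Allocate: activity cost = sum over resources of (total × share).
activity_cost = {}
for resource, total in resources.items():
    for activity, share in proportions[resource].items():
        activity_cost[activity] = activity_cost.get(activity, 0) + total * share

print(activity_cost)
```

Because each resource's shares sum to 1.0, the allocated activity costs sum back to the total program cost, a useful arithmetic check on any such table.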

STATISTICAL ANALYSIS PROGRAMS AND ASSOCIATED WEBSITES

• G*Power; free. https://pubmed.ncbi.nlm.nih.gov/17695343
• R; free but with a challenging learning curve. To start, see www.r-project.org
• jamovi; possibly less challenging than R but still free. www.jamovi.org/features.html
• SAS: www.sas.com/en_us/home.html
• STATA: www.stata.com
• SPSS: www.ibm.com/analytics/spss-statistics-software


DISCUSSION QUESTIONS

For this chapter, continue working with the program you chose for the discussion questions of Chapter 8. Again, it should be an entity with which you are familiar, such as a degree program, a health service, a manufacturing company, or a government service.

(1) To complete the logic model of the program, return to Table 9.3—the table that listed "Resources" in separate rows in the leftmost column of the table and "Activities" in separate columns to the right of "Resources." (Hint: Accounting forms can be useful tables offering many columns and rows. These can be mildly intimidating for some, but if some blank accounting forms are lying around, try using them!)

(2) Now skip a column; leave it blank. Next, in the fourth column from the left (perhaps your rightmost column in the table, especially on a letter-size page or screen), list the major "Outcomes" targeted by the program. (Guess these outcomes if they are not known, although it could be insight-producing to discuss why the outcomes are not clear.) Examples could include graduation, full recovery from infection, a year of sobriety, or percent of citizens paying taxes. If you are including monetary outcomes (benefits), consider making a separate column, even farther to the right, listing those, including increased income for participants, amount of tax paid, and possibly reduced use of social or health services in the future because they no longer are needed by former program participants. Note: If you make estimates, ensure that you document this in your analysis, along with any assumptions made, so that readers can be properly informed.

(3) Now, you and the colleague you involved in delineating resources and activities at the end of Chapter 8 should independently list outcomes you both desire for the program, including possible monetary outcomes, in columns on separate sheets. Next, compare your lists.
(4) Different perspectives, such as yours and your colleague's, may produce the same or markedly different outcomes. Combine your and your colleague's outcome lists in one column and the benefits in another column just to the right of outcomes.

(5) While thinking about and discussing outcomes with your colleague, whether monetary or not, you may have wondered whether immediate effects of program activities, such as lessened anxiety or depression, increased studying, improved interpersonal skills, or increased employee engagement, were outcomes. Because those probably were not the "end products" or "final outcomes" for which the program was funded, they would be processes. (Hint:


Processes usually are internal, not directly observable. Outcomes usually are directly observable, such as changes in behavior, increased earnings, or lessened use of income support.)

(6) At this point, you likely have a reasonably complete logic model showing and distinguishing between not only resources and activities, but also processes, outcomes, and perhaps monetary outcomes (benefits). Now you can develop tables for Activity × Process relationships and for Process × Outcome relationships as well (even, perhaps, for Outcomes × Benefits relationships). You and your colleague should, separately, fill in the cells in these new tables with estimates as best you can. Don't think too much about it. Insights usually occur while attempting to make those estimates. Another result of this process often is increased interest in collecting data rather than just using estimates. If so, excellent! That can be the impetus for a round of actual measurement and analysis: the heart of real cost-inclusive evaluation!

Note: This question specified that estimates can be used. When using estimates or "guesstimates," keep in mind that credibility is important or your cost analyses will be criticized. Thus you should ensure that you either use estimates from credible experts or, alternatively, collect data that are credible.
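Once the two tables in question (6) are filled in, they can be chained: an activity's implied link to an outcome is the sum, over processes, of (activity → process strength) × (process → outcome strength). The sketch below uses hypothetical activity, process, and outcome names and invented 0-1 strength estimates; it is one way such tables could be combined, not a prescribed RAPOA formula.

```python
# Hypothetical estimated strengths (0-1) linking activities to internal
# processes, and those processes to observable outcomes.
activity_process = {
    "counseling":      {"self-efficacy": 0.7, "craving reduction": 0.4},
    "skills training": {"self-efficacy": 0.5, "craving reduction": 0.2},
}
process_outcome = {
    "self-efficacy":     {"abstinence": 0.6, "employment": 0.5},
    "craving reduction": {"abstinence": 0.8, "employment": 0.1},
}

# Chain the two tables into an implied Activity × Outcome table.
activity_outcome = {
    act: {
        out: sum(p_links[proc] * process_outcome[proc][out]
                 for proc in p_links)
        for out in ("abstinence", "employment")
    }
    for act, p_links in activity_process.items()
}
print(activity_outcome)
```

Seeing which activities carry the largest implied weight toward a target outcome is exactly the sort of insight the question anticipates arising while making estimates.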

LIST OF ACRONYMS

ATOD	alcohol, tobacco, and other drugs
CEAC	cost-effectiveness acceptability curve
CER	cost/effectiveness ratio
CPA	certified public accountant
CPPA	cost-per-participant approach
DALY	disability-adjusted life years
DPP	discounted payback period
ECR	effectiveness/cost ratio
GAAP	generally accepted accounting principles
GIGO	garbage in, garbage out
NCF	net cash flow
NNT	number needed to treat
NPV	net present value
PP	payback period
PSA	probabilistic sensitivity analysis
PTSD	posttraumatic stress disorder
QALY	quality-adjusted life years
QALYG	quality-adjusted life years gained
RAPO	resources → activities → processes → outcomes
RAPOA	resources → activities → processes → outcomes analysis
RCSC	reliably and clinically significant change
ROI	return on investment
YLL	years of life lost

GLOSSARY

Activities are a class of variables in the resources → activities → processes → outcomes model that are observable acts, procedures, or steps in program operations. Specific activities are often standardized in programs, as they have been shown by research to be crucial to achieving desired outcomes via changes in biopsychosocial processes.

Balance sheet is a statement that shows the financial position of an entity as of a specific point in time.

Box-and-whisker diagram (sometimes termed a "box plot") is an image showing, for a sample of a given variable, such as cost or outcome measures, the (1) middle (median) value of the variable for that sample (same as the 50th percentile), (2) minimum value, (3) maximum value, (4) 25th percentile of the values sampled (often termed the "first quartile"), (5) 75th percentile ("third quartile"), (6) and (7) "whiskers" showing the values 1.5 interquartile ranges above and below the first and third quartiles, and (8) outliers (values beyond or below the whiskers). The resulting graph is more intuitively understandable than this definition.

Break-even point is the point at which you are making neither a profit nor a loss.

Capital costs are expenditures on tangible long-lived assets with a life beyond a year.

Cash budget is a detailed estimate (forecast, plan) of sources of cash and uses of cash over a specific period.


Client refers to the organization commissioning an evaluation.

Confidence interval (often called "error bars" in graphs) is a statistical expression estimating the range (the lower and upper bounds) within which it is nearly certain (at a specific percent level of confidence) that the mean or median of a distribution of values will be found if the variable is measured repeatedly; a 95% confidence interval is the range of values of a sampled variable that should not differ from the mean or median value with a probability greater than 1.00 – .95 = .05.


Cost behavior refers to how fixed costs and variable costs react to changes in activity level, output, or volume.

Cost-benefit analysis is an economic appraisal method that uses discounted costs and benefits to derive a benefit-cost ratio that indicates the amount earned or lost per dollar of expenditure. It permits comparison among programs with widely disparate objectives.

Cost driver or activity driver is something that triggers costs.

Cost-effectiveness analysis is an economic appraisal method that examines the costs of programs that are designed to achieve similar or the same outcomes.

Cost-feasibility analysis is used as a screening mechanism to ascertain the feasibility of viable projects that can be conducted with the available budget.

Cost-inclusive evaluation is an evaluation that utilizes economic, financial, and cost and management accounting methodologies to inform decision making.

Cost structure refers to the proportion of fixed costs in comparison with variable costs in an organization.

Cost-utility analysis assesses alternatives by comparing costs and utility (as perceived by users) to examine which option yields the greatest utility for a given cost. It uses subjective judgment and intuition, which makes it hard to establish its validity.

Cost-volume-profit analysis studies the relationship between revenue and expenditure in the short term and how this affects profit.


Direct costs are costs that can be easily traced in a cost-effective (economically feasible) manner to a specific cost object.

Direct services are activities in which providers engage participants in conversation or other interactions.

Discounted payback period is the number of years required to recover the original investment using discounted cash flows.

Effect size "is any of several measures of association or of the strength of a relation . . . often thought of as a measure of practical significance" (Vogt, 1999, p. 94). Effect sizes can be standardized or unstandardized.


Evaluand is a generic term for whatever is being evaluated—typically, a program or system rather than a person.

Ex-offender is an individual who was convicted of a crime but who completed the assigned sentence and probation.

Factorial design is a carefully designed plan for data collection and, in some contexts, experimentation that is rooted in social science methods. Factorial designs present every level of every independent variable under consideration but to different individuals, with each individual experiencing just one level of each independent variable. An exception to this is the variable of time of measurement. In some repeated-measures factorial designs, individual participants' status on any number of variables may be assessed at multiple times, such as before, during, shortly after, and long after program engagement.

Financial statements are the audited records of an organization and comprise the income statement, statement of cash flows, balance sheet, and notes to the financial statements.

Fixed costs are costs that remain constant in total regardless of activity level within the relevant range. When expressed on a per-unit basis, this cost becomes smaller with greater activity or volume.

Formative evaluation is an evaluation designed to help a program improve its operations, outcomes, costs, or other characteristics.

General journal is the book of original entry in the financial accounting system that details all an organization's financial transactions in chronological order of occurrence.




General ledger is the second book of entry in the financial accounting system. It is an extension of the general journal and provides a summary grouping of the accounts.


Iatrogenic is an adjective often applied to program activities that, contrary to program design and provider intent, lead to undesired outcomes, especially participant status that is worse rather than better following engagement in program activities.

Impact describes effects of program activities on other programs and in spheres well beyond programs: the community or even society at large. Pragmatically, impact often translates to "what the program funder cares about and funded the program to accomplish." Impact can imply financial outcomes such as reduced use of and consequent expenditures for health or criminal justice services offered by other programs. We term these monetary outcomes, as we understand that term to be more transparent, more readily understandable, than the distinction between "outcome" and "impact."

Income statement provides a summary of the revenues and expenses for the entity's reporting period and shows the resultant profit or loss for the same time period.

Indirect costs are costs that cannot easily be traced to a specific cost object.

Indirect services are activities performed by providers or other program staff, including administrators, without direct engagement of participants; also called "overhead services," and can be part of "overhead."

Intangible benefits are benefits that cannot be easily measured or quantified.

Intangible costs are costs that cannot be easily measured or quantified.

Internal rate of return is an economic metric used to estimate the profitability of a proposed initiative. It is the discount rate that makes the present value of a project's benefits equal to the present value of its costs.

Logic model is a diagram of variables that, moving from left to right or top to bottom, are hoped to represent program operations, usually including variables representing context and hoped-for outcomes. Logic models can be used to illustrate the theory of change that guides program activities.

Mixed methods evaluations are program evaluations that include both qualitative and quantitative outcomes.

Monetary benefits involve actual cash inflows. Cost-inclusive evaluations of social, health, and human service programs often lead to increased income for participants and reduced costs to society after program engagement; health, income support, and criminal justice services may be needed less or not at all following program engagement.

Monetary costs involve actual cash outflows. These outflows can be from programs, from participants, from communities, and from other interest groups.

Net present value is an economic appraisal method that examines the economic feasibility of a proposed investment by subtracting discounted costs from discounted benefits. It is concerned with wealth maximization.

Opportunity costs represent the forgone benefit of options not chosen, for example, the value of time, space, or another resource in terms of its best alternative use if the resource had not been used by the program.

Outcome is a shorter-term, observable result of program activities. Changes in processes (or outputs) are the focus of treatment activities, because process changes have been shown to soon produce observable results, such as increased work and engagement with family and friends and decreased drug use. We call those observable results outcomes because they are the results of changes in activities. Realistically, it may be more useful to develop logic models for programs that detail the expected series or chains of causal relationships between resources, activities, processes, and outcomes. Typically, activities yield immediate, short-, intermediate-, and longer-term processes inside participants. Those processes in turn lead to fairly immediate, short-, intermediate-, and longer-term outcomes.

Output is the initial or short-term results of engagement in program activities. In most human services, these outputs are internal to the participant. For example, specific therapeutic or medicinal activities are expected, according to theory and research, to reduce anxiety, depression, or drug cravings. Because these changes occur inside the participant, human services providers can find "output" confusing but "process" understandable, as the term process often refers to internal-to-the-participant events and changes.




Participant(s) refers to the person(s) receiving or involved with the program services.

Payback period is the number of years required to recover the original investment.
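The payback period just defined, together with net present value (defined above), reduces to a few lines of arithmetic. The sketch below uses invented cash flows and an assumed 5% discount rate; the function names are ours, for illustration only.

```python
def npv(rate, cash_flows):
    """Discounted sum of cash flows; cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # investment is never recovered

# Hypothetical program: $100,000 invested now, benefits over four years.
flows = [-100_000, 30_000, 40_000, 50_000, 60_000]
print(round(npv(0.05, flows), 2))
print(payback_period(flows))
```

Replacing the raw cash flows with their discounted values in `payback_period` would yield the discounted payback period defined earlier in this glossary.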


Processes are biological, psychological, or social (biopsychosocial) events internal to the participant, ongoing or episodic, that are targeted by program activities for initiation, modification, or elimination.

Qualitative data are information that cannot be measured on continua but that nevertheless can be essential in evaluation. Qualitative variables can take nominal values (e.g., female, male, and many other categories), status (e.g., graduated or not graduated yet, treated or not treated, ill or healthy), or possibly ordinal values (e.g., child, adolescent, adult, elder). Participant observation, archives, artifacts, focus groups, and interviews all can generate valuable qualitative data. Qualitative data often are subjective; that does not mean that they are unimportant!

Quality-adjusted life years (QALY) is the fraction or number of years of life added per participant as a result of a program, adjusted for quality of those added fractions or years of life.

Quality-adjusted life years gained (QALYG) is often used interchangeably with QALY.

Quantitative data are what many evaluators in business and human service settings mean when they say or write "data." Quantitative data can be counts of nominal values, such as numbers of females, males, and other genders. Quantitative variables often are reasonably continuous, such as years of age, antibody counts, years of education, money spent, and money received. Quantitative data can be easy to measure but can vary considerably in importance. "Not everything that can be counted counts" is a common consideration.

Ratio analysis is a quantitative financial performance measurement methodology that is used for various types of decision making and for benchmarking against past performance, as well as against the performance of competitors.

Recurrent costs or operating costs are costs incurred for everyday operations.


Reliability refers to the internal characteristics of an instrument for measuring resources, activities, processes, or outcomes: the instrument yields the same findings when used repeatedly at different times, by different evaluators, or both (typically assessed before program participation, when true scores should not yet have changed). Common forms of reliability in measurement are test–retest (temporal), inter-judge, and inter-item reliability.


Resources are the ingredients that make program activities possible. Typically these include the time and expertise of program providers and practitioners, participant family members, participants themselves, managers, and administrators; office and clinic space; participant transportation to and from offices and clinics; computers and other equipment; information services such as Wi-Fi; liability insurance; and more.

Resources → Activities → Processes → Outcomes Analysis (RAPOA) is a form of cost-effectiveness and cost-benefit analysis that includes information about program operations and checks on the initial success of program operations in changing variables that should lead to desired program outcomes.

Return on investment (ROI) is a metric used to evaluate financial performance.

Self-efficacy expectancy is a continuous variable reflecting a participant’s belief that he or she can perform a particular activity or achieve a specific goal; an example of an intra-participant biopsychosocial process that is targeted by specific activities in some programs.

Statistical power is the ability of an evaluation design, sample, and measure to detect a difference or relationship between two or more variables. Power depends on the number of individuals from whom data are collected, the ability of the measure to detect differences (a form of validity), and the evaluator’s willingness to err by declaring a difference or relationship that does not actually exist or by missing a difference or relationship that does exist.

Summative evaluation is a type of evaluation designed to judge the outcomes, costs, or both of a program, but not intended to improve program operations, costs, or outcomes.

Sunk costs are defined in economics, accounting, and business as incurred expenses: money already spent that is irrecoverable.
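As a sketch of how the ingredients of statistical power interact (per-group sample size, a standardized effect size, and the tolerated Type I error rate), the normal approximation to the power of a two-group comparison of means can be computed with Python's standard library. The function name and all figures are illustrative assumptions, not from the book:

```python
from math import sqrt
from statistics import NormalDist

def two_group_power(n_per_group, effect_size_d, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means
    (normal approximation to the t-test), given Cohen's d."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)   # e.g., about 1.96 for alpha = .05
    noncentrality = abs(effect_size_d) * sqrt(n_per_group / 2)
    return z.cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) with 64 participants per group gives
# power of roughly .80, a conventional target in evaluation planning.
print(round(two_group_power(64, 0.5), 2))
```

Doubling the per-group sample size raises power; so does a larger true effect or a more lenient alpha, which is exactly the trade-off described in the entry above.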




Tangible benefits are benefits that can be converted into monetary units.

Tangible costs are resources that can be easily quantified and converted into monetary units.

Trial balance is a list of the organization’s accounts and their debit or credit balances. This information is used for preparing the organization’s financial statements.
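A minimal sketch of the trial-balance check (the account names and amounts are hypothetical): before financial statements are prepared, total debits should equal total credits across all accounts.

```python
# Hypothetical ledger balances: positive = debit, negative = credit
accounts = {
    "Cash":              12_000,
    "Equipment":          8_000,
    "Accounts payable":  -5_000,
    "Grant revenue":    -20_000,
    "Salaries expense":   5_000,
}

debits = sum(v for v in accounts.values() if v > 0)
credits = -sum(v for v in accounts.values() if v < 0)

# A trial balance "balances" when the two totals agree
print(debits, credits, debits == credits)
```

An imbalance signals a recording error somewhere in the ledger, which is why the trial balance is run before statements are drawn up.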


Validity refers to the external characteristics of an instrument for measuring resources, activities, processes, or outcomes: it answers questions about the meaningfulness of findings. Common among the many types of validity are face, content, construct, discriminant, and predictive (criterion) validity.

Variable costs vary in total in direct proportion to the level of operational activity.
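A minimal sketch contrasting variable with fixed costs (the clinic figures are hypothetical): total cost rises in direct proportion to activity only through its variable component, while the fixed component is incurred even at zero activity.

```python
def total_cost(fixed_costs, variable_cost_per_unit, units_of_activity):
    """Fixed costs do not change with activity level; variable costs
    grow in direct proportion to the number of units of activity."""
    return fixed_costs + variable_cost_per_unit * units_of_activity

# Hypothetical clinic: $10,000/month fixed, $25 of supplies per session
print(total_cost(10_000, 25, 0))    # fixed costs remain even at zero sessions
print(total_cost(10_000, 25, 400))  # variable share doubles the monthly total
```

This linear cost function is also the basis of break-even and cost-volume-profit analysis mentioned elsewhere in the book.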


References

Abdullahi, S. R., Sulaimon, B. A., Mukhtar, I. S., & Musa, M. H. (2017). Cost–­ volume–­profit analysis as a management tool for decision making in small business enterprise within Bayero University, Kano. Journal of Business and Management, 19(2), 40–45. Albrecht, S. W., Stice, E. K., Stice, J. D., & Swain, M. R. (2011). Accounting: Concepts and applications (11th ed.). Mason, OH: South-­Western Cengage Learning. Alnasser, N., Shaban, S. O., & Al-Zubi, Z. (2014). The effect of using break-evenpoint in planning, controlling, and decision making in the industrial Jordanian companies. International Journal of Academic Research in Business and Social Science, 4(5), 626–636. American Evaluation Association. (2018). Guiding principles for evaluators. Retrieved from www.eval.org/About/Guiding- ­Principles. American Psychological Association Presidential Task Force on Evidence-­Based Practice. (2006). Evidence-­based practice in psychology. American Psychologist, 61, 271–285. Armstrong, J. (2010). How effective are minimally trained/experienced volunteer mental health counsellors? Evaluation of CORE outcome data. Counseling and Psychotherapy Research, 10, 22–31. Bandura, A. (1997a). Self-­efficacy: The exercise of control. New York: Freeman/ Times Books/Holt. Bandura, A. (1997b). Self-­efficacy and health behaviour. In A. Baum, S. Newman, J. Wienman, R. West, & C. McManus (Eds.), Cambridge handbook of psychology, health and medicine (pp. 160–162). Cambridge, UK: Cambridge University Press. Bandura, A., Blanchard, E., & Ritter, B. (1969). Relative efficacy of desensitization and modeling approaches for inducing behavioral, affective, and attitudinal changes. Journal of Personality and Social Psychology, 13, 173–199. Basu, A., & Sullivan, S. D. (2017). Toward a hedonic value framework in health care. Value Health, 20, 261–265. Belli, P., Anderson, J. R., Barnum, H. N., Dixon, J. A., & Tan, J. P. (2001).

 251

252 

  References

Economic analysis of investment operations: Analytical tools and practical applications. Washington, DC: World Bank Institute. Retrieved from http://documents1.worldbank.org/curated/en/792771468323717830/ pdf/298210REPLACEMENT.pdf. Berghmans, R., Berg, M., van den Burg, M., & ter Meulen, R. (2004). Ethical issues of cost-­effectiveness analysis and guideline setting in mental health care. Journal of Medical Ethics, 30(2), 146–150. Boadway, R. (2006). Principles of cost-­benefit analysis. Public Policy Review, 2(1), 1–43. Bragg, S. M. (2019). Cost accounting: A decision-­m aking guide (2nd ed.). Centennial, CO: Accounting Tools. Bruce, F. C. M. (2000). On the use of common units of account in cost-­benefit analysis. In Course manual for project planning, monitoring and evaluation. London: University of London, Wye College. Buchanan, H., & MacDonald, W. (2012, November). Anytime, anywhere, evaluation ethics matter! Presentation at the meeting of the American Evaluation Association, Anaheim, CA. Buros Mental Measurement Yearbook. (n.d.). https://buros.org/mental-­ measurements-­yearbook. Retrieved August 12, 2022. Carter Center. (n.d.). Guinea worm eradication program. Retrieved from www. cartercenter.org/health/guinea_worm. Chawla, K. (1990). Social cost-­benefit analysis. New Delhi, India: Mittal. Chelimsky, E. (1997). The political environment of evaluation and what it means for the development of the field. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 53–68). Thousand Oaks, CA: SAGE. Christie, C. A., & Fleischer, D. N. (2010). Insight into evaluation practice: A content analysis of designs and methods used in evaluations studies published in North American evaluation-­focused journals. American Journal of Evaluation, 31(3), 326–346. Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159. Cohen, M. (2009). The valuation of intangible assets: A new approach using hedonic pricing models. 
Saarbucken, Germany: VDM Verlag. Collins, L. M., Murphy, S. A., & Strecher, V. (2007). The Multiphase Optimization Strategy (MOST) and the Sequential Multiple Assignment Randomized Trial (SMART): New methods for more potent ehealth intervention. American Journal of Prevention Medicine, 32(5, Suppl.), S112–S118. Commonwealth of Australia. (2006, January). Handbook of cost–­benefit analysis. Retrieved from www.fao.org/aghumannutrition/33237-0b38a7524 7f8e69f48a24d7ec850693b2.pd Corporate Finance Institute. (2020). Financial ratios. Retrieved from https:// corporatefinanceinstitute.com/resources/knowledge/finance/financial-­ ratios. Datar, S. M., & Rajan, M. V. (2018). Horngren’s cost accounting: A managerial emphasis (6th ed.). Essex, UK: Pearson Education. Davidson, J. E. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks, CA: SAGE. Defoy, S. T. (2011). Performance auditing at the Office of the Auditor General of Canada: Beyond bean counting (Publication No. 2011-71-E). Retrieved from

References 

 253

https://publications.gc.ca/collections/collection_ 2011/bdp-lop/bp/2011-71eng.pdf. DeMuth, N. M., Yates, B. T., & Coates, T. (1984). Psychologists as managers: Old guilts, innovative applications, and pathways to being an effective managerial psychologist. Professional Psychology, 15, 758–768. Department of the Prime Minister and Cabinet, Government of Australia. (2016). Cost benefit analysis guidance note. Retrieved from www.pmc.gov.au/ resource-­centre/regulation/cost-­benefit-­analysis-­guidance-­note. Devine, P. W., Srinivasan, C. A., & Zaman, M. S. (2004). Importance of data in decision making. In M. Anandarajan, A. Anandarajan, & C. A. Srinivasan (Eds.), Business intelligence techniques (pp.  21–39). Berlin & Heidelberg: Springer. Dijkstra, K. A., & Hong, Y. Y. (2019). The feeling of throwing good money after bad: The role of affective reaction in the sunk-cost fallacy. PLOS ONE, 14(1). Retrieved from https://journals.plos.org/plosone/article?id=10.1371/journal. pone.0209900. Dixon, J. A., Scura, L. F., Carpenter, R. A. & Sherman, P. B. (1994). Economic analysis of environmental impacts. London: Earthscan. Drummond, M. F., Sculpher, M. J., Claxton, K., Stoddart, G. L., & Torrance, G. W. (2015). Methods for the economic evaluation of health care programmers (4th ed.). Oxford, UK: Oxford University Press. Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-­ analysis, and the interpretation of research results. Cambridge, UK: Cambridge University Press. Encyclopedia Britannica Ready Reference. (2003). Costs. Chicago: Encyclopedia Britannica. European Commission. (2014). Guide to cost-­benefit analysis of investment projects: Economic appraisal tool for cohesion policy 2014–2020. Retrieved from https://iwlearn.net/resolveuid/719db343-4025-45dc-9e6f541a2d43e482. Faulin, J., Grasman, S., Juan, A., & Hirsch, P. (2018). Sustainable transportation and smart logistics: Decision-­m aking models and solutions. Amsterdam: Elsevier. 
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Allyn & Bacon. Flyvbjerg, B., Holm, M. S., & Buhl, S. (2002). Underestimating costs in public works projects: Error or lie? APA Journal, 68(3), 279–295. Freed, M. C., Rohan, K. J., & Yates, B. T. (2007). Estimating health utilities and quality adjusted life years in seasonal affective disorder research. Journal of Affective Disorders, 100, 83–89. French, A. N., Yates, B. T., & Fowles, T. R. (2018). Cost-­effectiveness of parent–­ child interaction therapy in clinics versus homes: Client, provider, administrator, and overall perspectives. Journal of Child and Family Studies, 27, 3329–3344. Fuguitt, D., & Wilcox, S. J. (1999). Cost-­benefit analysis for public sector decision makers. Westport, CT: Quorum Books. Garrison, R. H., Noreen, E. W., & Brewer, P. C. (2017). Managerial accounting (16th ed.). New York: McGraw Hill.

254 

  References

Gilpin, A. (1995). Environmental impact assessment: Cutting edge for the twenty-­ first century. Cambridge, UK: Cambridge University Press. Glick, H. A., Doshi, J. A., Sonnad, S. S., & Polsky, D. (2015). Economic evaluation in clinical trials (2nd ed.). Oxford, UK: Oxford University Press. Gone, J. P. (2021). The (post) colonial predicament in community mental health services for American Indians: Explorations in alter-­Native psy-ence. American Psychologist, 76(9), 1514–1525. Gorman, J. A., McKay, C. E., Yates, B. T., & Fisher, W. H. (2018). Keeping clubhouses open: Toward a roadmap for sustainability. Administration and Policy in Mental Health and Mental Health Services Research, 45, 81–90. Gramlich, E. M. (1981). Benefit-­cost analysis of government programs. Englewood Cliffs, NJ: Prentice Hall. Hatswell, A. J., Bullement, A., Briggs, A., Paulden, M., & Stevenson, M. D. (2018). Probabilistic sensitivity analysis in cost-­effectiveness models: Determining model convergence in cohort models. Pharmacoeconomics, 36(12), 1421–1426. Herman, P. M., Avery, D. J., Schemp, C. S., & Walsh, M. E. (2009). Are cost-­ inclusive evaluations worth the effort? Evaluation and Program Planning, 32, 55–56. H.M. Treasury. (2018). The green book: Central Government guidance on appraisal and evaluation. Retrieved from https://assets.publishing.service. gov.uk/government/uploads/system/uploads/attachment_data/file/685903/ TheGreenBook.pdf. Hollands, F., Pan, Y., & Escueta, M. (2019). What is the potential for applying cost-­utility analysis to facilitate evidence-­based decision making in schools? Educational Researcher, 48(5), 287–295. Horngren, T. C., Datar, S. M., George, F., Rajan, M., & Ittner, C. (2008). Cost accounting: A managerial emphasis (13th ed.). Essex, UK: Pearson Education. House, E. R. (1993). Professional evaluation: Social impact and political consequences. Newbury Park, CA: SAGE. Hubert, M., & Vandervieren, E. (2008). An adjusted boxplot for skewed distribution. 
Computational Statistics and Data Analysis, 52(12), 5186–5201. Independent Evaluation Group. (2010). Cost-­benefit analysis in World Bank projects. Washington, DC: International Bank for Reconstruction and Development/World Bank. Retrieved from https://openknowledge. worldbank.org/bitstream/handle/10986/2561/624700PUB0Cost00ox0361 484B0PUBLIC0.pdf?sequence=1&isAllowed=y. Independent Expert Advisory Group. (2014). A world that counts: Mobilising the data revolution for sustainable development. Retrieved from www.­ undatarevolution.org/wp-­content/uploads/2014/11/A-World-That- ­C ounts. pdf. Internal Revenue Service. (2021). Federal tax obligations of non-­profit corporations. Retrieved from www.irs.gov/pub/irs-pdf/f990.pdf. Internal Revenue Service. (n.d.). About form 990, return of organization exempt from income tax. Retrieved from www.irs.gov/forms-pubs/about-form-990. Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.

References 

 255

Jarmolowicz, D. P., Bickel, W. K., Sofis, M. J., Hatz, L. E., & Mueller, E. T. (2016). Sunk costs, psychological symptomology, and help seeking. SpringerPlus, 5, 1699. Kacmarek, C. N., Yates, B. T., Nich, C., & Kiluk, B. D. (2021). A pilot economic evaluation of computerized cognitive behavioral therapy for alcohol use disorder as an addition and alternative to traditional therapy. Alcoholism: Clinical and Experimental Research, 45(5), 1109–1121. Kant, I. (2018). Ethics “nebulous.” South African Dental Journal, 73(9). Available at http://www.scielo.org.za/scielo.php?pid=S001185162018000900012&script=sci_arttext&tlng=es. Kazdin, A. E. (2003). Research design in clinical psychology (4th ed.). Boston: Allyn & Bacon. Kee, J. E. (2004). Cost-­effectiveness and cost-­benefit analysis. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 506–541). San Francisco: Jossey-­Bass. Kind, J. (2001). Accounting and finance for managers. London: Kogan Page. King, J. (2019a). Combining multiple approaches to valuing in the MUVA female economic empowerment program. Evaluation Journal of Australasia, 19(4), 217–225. King, J. (2019b). Evaluation and value for money: Development of an approach using explicit evaluative reasoning [University of Melbourne, Australia, Unpublished doctoral dissertation]. Kinney, M. R., Prather-­K insey, J., & Raiborn, C. A. (2006). Cost accounting: Foundations and evolutions. Mason, OH: Thomson Higher Education. Kraemer, H. C., & Kupfer, D. J. (2006). Size of treatment effects and their importance to clinical research and practice. Biological Psychiatry, 59(11), 990– 996. Levin, H. M. (1983). Cost-­effectiveness in evaluation research. In E. L. Struening & M. B. Brewer (Eds.), Handbook of evaluation research (pp. 345–379). Beverly Hills, CA: SAGE. Levin, H. M., & McEwan, P. J. (2001). Cost-­effectiveness analysis: Methods and applications (2nd ed.). Thousand Oaks, CA: SAGE. Levin, H. M., McEwan, P. 
J., Belfield, C., Bowden, A. B., & Shand, R. (2018). Economic evaluation in education: Cost-­effectiveness and benefit-­cost analysis (3rd ed.). Los Angeles: SAGE. Linfield, K., & Posavac, E. J. (2019). Program evaluation: Methods and case studies (9th ed.). New York: Routledge. Lothian, N., & Small, J. (2003). Accounting. London: Pearson Education. Madsen, L. B., Eddleston, M., Hansen, K. S., & Konradsen, F. (2017). Quality assessment of economic evaluations of suicide and self-harm interventions: A systematic review. Crisis, 39(2), 82–95. Martens, M. P., Smith, A. E., & Murphy, J. G. (2013). The efficacy of single-­ component brief motivational interventions among at-risk college drinkers. Journal of Consulting and Clinical Psychology, 81, 691–701. Mathematica Policy Research. (1982). Evaluation of the economic impact of the Jobs Corps program: Third follow-­up report. Washington, DC: U.S. Department of Labor, Office of Policy Evaluation and Research, Employment and Training Administration. McKay, C. E., Yates, B. T., & Johnsen, M. (2007). Costs of clubhouses: An interna-

256 

  References

tional perspective. Administration and Policy in Mental Health and Mental Health Services Research, 34, 62–72. McKinney, J. B. (2004). Effective financial management in public and nonprofit agencies (3rd ed.). Westport, CT: Praeger. McMillan, D., Gilbody, S., & Richards, D. (2010). Defining successful treatment outcome in depression using the PHQ-9: A comparison of methods. Journal of Affective Disorders, 127(1–3), 122–129. Mertens, D. M. (2010). Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods (3rd ed.). Thousand Oaks, CA: SAGE. Mertens, D. M. (2018). Mixed methods design in evaluation. Thousand Oaks, CA: SAGE. Mitchell, O. (2016). Experimental research design. In W. G. Jennings (Ed.),] Encyclopedia of crime and punishment. Wiley Online Library. Mohr, B. L. (1995). Impact analysis for program evaluation. Thousand Oaks, CA: SAGE. Morris, M. (2005). Ethics. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 131–134). Thousand Oaks, CA: Sage. Morris, M. (2015). Research on evaluation ethics: Reflections and an agenda. New Directions for Evaluation, 148, 31–42. Nas, T. F. (1996). Cost-­benefit analysis: Theory and application. Thousand Oaks, CA: SAGE. National Archives. (2020). Executive Order 12291. Retrieved from www.archives. gov/federal- ­register/codification/executive-­order/12291.html. Needles, B. E., Powers, M., & Crosson, S. V. (2011). Financial and managerial accounting (9th ed.). Mason, OH: South-­Western Cengage Learning. New Zealand Treasury. (2005). Cost benefit analysis premier. Wellington, New Zealand: Author. New Zealand Treasury. (2015). Guide to social cost benefit analysis. Retrieved from www.treasury.govt.nz/publications/guide/guide-­social-­cost-­benefit-­ analysis. Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (1994). Meeting the need for practical evaluation approaches: An introduction. In K. E. Newcomer, H. P. Hatry, & J. S. 
Wholey (Eds.), Handbook of practical program evaluation (pp. 1–39). San Francisco: Jossey-­Bass. Office of Management and Budget. (1992). OMB Circular A-94 guidelines and discount rates for benefit-­cost analysis of federal programs. Retrieved from www.wbdg.org/FFC/FED/OMB/OMB- ­C ircular-­A 94.pdf. Office of Management and Budget. (n.d.). Circular No. A-122 Revised: Cost principles for non-­ profit organizations. Retrieved from https://georgewbush-­ whitehouse.archives.gov/omb/circulars/a122/a122.html. Olivola, C. Y. (2018). The interpersonal sunk-cost effect. Psychological Science, 29(7), 1072–1083. Organization for Economic Co-­operation and Development. (2017). Development cooperation report 2017: Data for development. Highlights. Retrieved from www.oecd.org/dac/DCR2017 highlights.pdf. Oxford Population Health. (n.d.). Economic evaluation in clinical trials: Supporting materials. Retrieved from www.herc.ox.ac.uk/downloads/economic-­ evaluation-­in-­clinical-­trials.

References 

 257

Pallant, J. (2001). SPSS survival manual. Berkshire, UK: Open University Press. Palmer, S., & Raftery, J. (1999). Opportunity cost. BMJ, 318(5), 1551–1552. Pan American Health Organization. (n.d.). Cost benefit analysis methodology. Retrieved from www.paho.org/disasters/index.php?option=com_ docman&view=download&alias=2178-smart-­hospitals-­toolkit-­ cost-­benefit-­analysis-­cba&category_slug=smart-­hospitals-­ toolkit&Itemid=1179&lang=en. Peräkylä, A. (2004). Reliability and validity in research based on naturally occurring social interaction. In D. Silverman (Ed.), Qualitative research: Theory, method, and practice (2nd ed., pp. 283–304). London: SAGE. Persaud, N. (2007). Conceptual and practical analysis of costs and benefits in evaluation: Developing a cost analysis tool for practical program evaluation. Dissertations, 906. Persaud, N. (2009a). Cost analysis. In C. Wankel (Ed.), Encyclopedia of business in today’s world (Vol. 1, pp. 415). Thousand Oaks, CA: SAGE. Persaud, N. (2009b). Financial statement analysis. In C. Wankel (Ed.), Encyclopedia of business in today’s world (Vol. 2, pp. 419–421). Thousand Oaks, CA: SAGE. Persaud, N. (2009c). Cost structure. In C. Wankel (Ed.), Encyclopedia of business in today’s world (Vol. 1, pp. 676–677). Thousand Oaks, CA: SAGE. Persaud, N. (2011). Discounting. In N. Cohen (Ed.), Green business (pp. 138–141). Thousand Oaks, CA: SAGE. Persaud, N. (2018). A practical framework and model for promoting cost-­inclusive evaluation. Journal of Multidisciplinary Evaluation, 14(30), 88–104. Persaud, N. (2020). Adopting tools from cost and management accounting to improve the manner in which costs in social programs are analyzed and evaluated. Journal of Multidisciplinary Evaluation, 16(34), 1–13. Persaud, N. (2021). Expanding the repertoire of evaluation tools so that evaluation recommendations can assist nonprofits to enhance strategic planning and design of program operations. Evaluation and Program Planning, 89, 1–11. Persaud, N. 
(in press). Strengthening evaluation culture in the English-speaking Commonwealth Caribbean: A guide for evaluation practitioners and decision-­m akers in the public, private, and NGO sectors. Kingston, Jamaica: Arawak. Persaud, N., & Dagher, R. (2020). Evaluations in the English-­speaking Commonwealth Caribbean region: Lessons from the field. American Journal of Evaluation, 41(2), 255–276. Persaud, N., & Rudy, D. (2008, November 6). Conducting cost-­efficient evaluations that produce credible results. Paper presented at the 22nd Annual Conference of the American Evaluation Association, Denver, CO. Persson, E., & Tinghög, G. (2020). Opportunity cost neglect in public policy. Journal of Economic Behavior and Organization, 170, 301–312. Peterson, H. (2020, October 29). Cost-­benefit analysis (CBA) or the highway? An alternative road to evaluating the value for money of complex government interventions. Paper presented at the virtual annual meeting of the American Evaluation Association. PhenX Toolkit (n.d.), accessed August 11, 2022: Retrieved from www.phenxtoolkit.org/index.php. Pinkerton, S. D., Masotti-­Johnson, A. P., Derse, A., & Layde, P. M. (2002). Ethi-

258 

  References

cal issues in cost-­effectiveness analysis. Evaluation and Program Planning, 25(1), 71–83. Posavac, E. J. (2011). Program evaluation: Methods and case studies (8th ed.). Boston: Prentice-­Hall. Posavac, E. J., & Carey, R. G. (2003). Program evaluation: Methods and case studies (6th ed.). Upper Saddle Road, NJ: Prentice Hall. Rogers, P. J., Hacsi, T. A., Petrosino, A., & Heubner, T. A. (Eds.). (2000). Program theory in evaluation: Challenges and opportunities. San Francisco, CA: Jossey-­Bass. Rogers, P. J., Stevens, K., & Boymal, J. (2009). Qualitative cost-­benefit evaluation of complex, emergent programs. Evaluation and Program Planning, 32, 83–90. Rossi, H. P., Lipsey, W. M., & Freeman, E. H. (2004). Evaluation: A systematic approach (7th ed). Thousand Oaks, CA: SAGE. Roth, S., Robbert, T., & Straus, L. (2015). On the sunk-cost effect in decision-­ making: A meta-­analytic review. Business Research, 8, 99–138. Royce, D., Thyer, B., Padgett, D., & Logan, T. (2001). Program evaluation: An introduction (3rd ed.). Toronto, Ontario, Canada: Wadsworth. Sassone, P. G., & Schaffer, W. A. (1978). Cost-­benefit analysis: A handbook. New York: Academic Press. Sava, F. A., Yates, B. T., Lupu, V., Hatieganu, I., Szentagotai, A., & David, D. (2009). Cost-­effectiveness and cost-­utility of cognitive therapy, rational emotive behavioral therapy, and fluoxetine (Prozac) in treating depression: A randomized clinical trial. Journal of Clinical Psychology, 65, 36–52. Sawilowsky, S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8(2), 467–474. Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagné, & M. Scriven, M. (Eds.), Perspectives of curriculum evaluation (AERA Monograph Series on Curriculum Evaluation, No. 1, pp. 39–83). Chicago: Rand McNally. Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: SAGE. Scriven, M. (2007). Key evaluation checklist. 
Retrieved from https://wmich.edu/ sites/default/files/attachments/u350/2014/key%20evaluation%20checklist. pdf. Scriven, M. (2015). Key evaluation checklist. Retrieved from http://michaelscriven. info. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-­ experimental designs for generalized causal inference. Boston: Houghton Mifflin. Sheng, Y. M. (2009). Research on selection methods of cost driver. Journal of Modern Accounting and Auditing, 5(9), 47–49. Siegert, F. A., & Yates, B. T. (1980). Cost-­effectiveness of individual in-­office, individual in-home, and group delivery systems for behavioral child management. Evaluation and the Health Professions, 3, 123–152. Sirois, L. P. (2019). The psychology of sunk cost: A classroom experiment. Journal of Economic Education, 50(4), 398–409. Smith, R. D. (2002). Motivating ethical behavior though cost-­benefit analysis. World Business Academy, 16(1), 1–5. Retrieved from https://worldbusiness. org/wp- ­content/uploads/2013/05/pr013002.pdf.

References 

 259

Stake, R. E. (2004). Standards-­based and responsive evaluation. Thousand Oaks, CA: SAGE. Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco: Jossey-­Bass. Taxman, F. S., & Yates, B. T. (2001). Quantitative exploration of Pandora’s box of treatment and supervision: What goes on between costs in and outcomes out. In B. C. Welsh & D. P. Farrington (Eds.), Costs and benefits of preventing crime (pp. 51–84). Boulder, CO: Westview Press. Treasury Board of Canada Secretariat. (1998). Benefit-­ cost analysis guide. Retrieved from www.tbs-sct.gc.ca/rtrap-parfa/analys/analys- ­e ng.pdf. Treasury Board of Canada Secretariat. (2007). Canadian cost-­ benefit analysis guide: Regulatory proposals. Retrieved from www.tbs-sct.canada.ca/rtrapparfa/analys/analys- ­e ng.pdf. TreeAge Pro. (2021). [Software.] Retrieved February 4, 2021, from www.treeage. com. Trenouth, L., Colbourn, T., Fenn, B., Pietzsch, S., Myatt, M., & Puett, C. (2018). The cost of preventing undernutrition: Cost, cost-­ efficiency, and cost-­ effectiveness of three cash-based interventions on nutrition outcomes in Dadu, Pakistan. Health Policy and Planning, 33, 743–754. United Nations Evaluation Group. (2005). Standards for evaluation in the UN system. Retrieved from https://unsdg.un.org/sites/default/files/UNEG-­ Standards-­for-­Evaluation-­in-the-UN-­System-­E NGL.pdf. United Nations Sustainable Development Group. (2016). The common premises cost benefit analysis tool. Retrieved from https://unsdg.un.org/resources/ common-­premises-­cost-­benefit-­analysis-­tool. University of Melbourne Latin American Herald Tribune. (2020). Standard food basket price in Venezuela dwarfs minimum wage. Retrieved from www. sandiegouniontribune.com/en-­e spanol/sdhoy-­standard-­food-­basket-­p ricein-­venezuela-­dwarfs-2014nov06-story.html. U.S. Department of Health and Human Services, Children’s Bureau. (2014). 
Cost analysis in program evaluation: A guide for child welfare researchers and service providers. Retrieved from www.acf.hhs.gov/cb/training-­ technical-­a ssistance/cost-­analysis-­program-­e valuation-­g uide-child-­welfare-­ researchers. U.S. Government Accountability Office. (2004, July). GAO’s Congressional protocols No. GAO-04-310G. Retrieved from www.gao.gov/assets/gao-04-310g. pdf. U.S. Government Accountability Office. (2009). GAO cost estimating and assessment guide: Best practices for developing and managing capital program costs. Retrieved from www.gao.gov/assets/710/705312.pdf. Venkatachalam, L. (2004). The contingent valuation method: A review. Environmental Impact Assessment Review, 24, 89–124. Vogt, W. P. (1999). Dictionary of statistics and methodology: A nontechnical guide for the social sciences (2nd ed.). Thousand Oaks, CA: SAGE. Wagner, T. H. (2020). Rethinking how we measure costs in implementation research. Journal of General Internal Medicine, 35, 870–874. Weiss, C. H. (1998). Evaluation. Upper Saddle River, NJ: Prentice Hall. Weygandt, J. J., Kieso, D., & Kimmel, P. D. (2016). Financial accounting (10th ed.). Danvers, MA: Wiley.

260 

  References

Weygandt, J. J., Kimmel, P. D., & Kieso, D. E. (2010). Managerial accounting: Tools for business decision making (5th ed.). Hoboken, NJ: Wiley. White, J. L., Albers, C. A., DiPerna, J. C., Elliott, S. N., Kratochwill, T. R., & Roach, A. T. (2005). Cost analysis in educational decision making: Approaches, procedures and case examples (Wisconsin Center for Education Research Working Paper No. 2005-1). Retrieved from www.academia.edu/23655007/Cost_ analysis_in_educational_decision_making_ Approaches_procedures_ and_ case_examples Wholey, J. S., Hatry, H. P., & Newcomer, K. E. (2010). Handbook of practical program evaluation (3rd ed.). San Francisco: Jossey-­Bass. Williams, A. (1992). Cost-­effectiveness analysis: Is it ethical? Journal of Medical Ethics, 18, 7–11. Wisconsin Department of Children and Families. (2012). Allowable cost policy manual. Madison, WI: Author. World Bank. (1996). Handbook on economic analysis of investment operations. https://sorbonne.pierrekopp.com/downloads/1999%20World%20 Bank%20handbook.pdf. Yates, B. T. (1978). Improving the cost-­effectiveness of obesity programs: Reducing the cost per pound. International Journal of Obesity, 2, 249–266. Yates, B. T. (1979). How to improve, rather than evaluate, cost-­ effectiveness. Counseling Psychologist, 8, 72–75. Yates, B. T. (1980a). Improving effectiveness and reducing costs in mental health. Springfield, IL: Thomas. Yates, B. T. (1980b). Survey comparison of success, morbidity, mortality, fees, and psychological benefits and costs of 3146 patients receiving jejunoileal or gastric bypass. American Journal of Clinical Nutrition, 33, 518–522. Yates, B. T. (1980c). The theory and practice of cost-­utility, cost-­effectiveness, and cost-­benefit analysis in behavioral medicine: Toward delivering more health care for less money. In J. Ferguson & C. B. Taylor (Eds.), The comprehensive handbook of behavioral medicine (Vol. 3; pp. 165–205). New York: SP Medical & Scientific. Yates, B. T. (1994). 
Toward the incorporation of costs, cost-­effectiveness analysis, and cost-­benefit analysis into clinical research. Journal of Consulting and Clinical Psychology, 62, 729–736. Yates, B. T. (1996). Analyzing costs, procedures, processes, and outcomes in human services. Thousand Oaks, CA: SAGE. Yates, B. T. (1997). Formative evaluation of costs, cost-­effectiveness, and cost-­ benefit: Toward cost → procedure → process → outcome analysis. In L. Bickman & D. Rog (Eds.), Handbook of applied social research methods (pp. 285–314). Thousand Oaks, CA: SAGE. Yates, B. T. (1999). Measuring and improving cost, cost-­effectiveness, and cost-­ benefit for substance abuse treatment programs (NIH Publication No. 99-4518). Rockville, MD: National Institute on Drug Abuse. Available at https://archives.drugabuse.gov/sites/default/files.costs.pdf. Yates, B. T. (2002). Roles for psychological procedures, and psychological processes, in cost-­offset research: Cost → procedure → process → outcome analysis. In N. A. Cummings, W. T. O’Donohue, & K. E. Ferguson (Eds.), The impact of medical cost offset on practice and research: Making it work for you (pp. 91–123). Reno, NV: Context Press.


Yates, B. T. (2009). Cost-inclusive evaluation: A banquet of approaches for including costs, benefits, and cost-effectiveness and cost-benefit analyses in your next evaluation. Evaluation and Program Planning, 32, 52–54.
Yates, B. T. (2010). Evaluating costs and benefits of consumer-operated services: Unexpected resistance, unanticipated insights, and déjà vu all over again. In J. A. Morell (Ed.), Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable (pp. 224–230). New York: Guilford Press.
Yates, B. T. (2012). Step arounds for common pitfalls when valuing resources used versus resources produced. In G. Julnes (Ed.), Promoting valuation in the public interest: Informing policies for judging value in evaluation [Special issue]. New Directions in Program Evaluation, 133, 43–52.
Yates, B. T. (2020). Research on improving outcomes and reducing costs of psychological interventions: Toward delivering the best to the most for the least. In T. Widiger (Ed.), Annual Review of Clinical Psychology, 16, 125–150.
Yates, B. T. (2021). Toward collaborative cost-inclusive evaluation: Adaptations and transformations for evaluators and economists. Evaluation and Program Planning, 89, 101993.
Yates, B. T., Mannix, D., Freed, M. C., Campbell, J., Johnsen, M., Jones, K., & Blyler, C. (2011). Consumer-operated service programs: Monetary and donated costs and cost-effectiveness. Psychiatric Rehabilitation Journal, 35(2), 91–99.
Yates, B. T., & Marra, M. (2017). Social return on investment (SROI): Problems, solutions . . . and is SROI a good investment? Evaluation and Program Planning, 64, 136–144.
Zerbe, R. (2018). A distinction between benefit-cost analysis and cost benefit analysis moral reasoning and a justification for benefit cost analysis. Retrieved from www.researchgate.net/publication/315815836_A_Distinction_between_Benefit-Cost_Analysis_and_Cost_Benefit_Analysis_Moral_Reasoning_and_a_Justification_for_Benefit_Cost_Analysis

Author Index

Abdullahi, S. R., 161
Albrecht, S. W., 130, 131, 132, 134, 138, 141, 161, 165
Alnasser, N., 158
Al-Zubi, Z., 157, 158
American Evaluation Association, 235
American Psychological Association, 215
Anderson, J. R., 34
Armstrong, J., 189
Avery, D. J., 9
Bandura, A., 212, 213
Barnum, H. N., 34
Basu, A., 87
Belfield, C., 28
Belli, P., 34, 114
Berg, M., 64
Berghmans, R., 64
Bickel, W. K., 33
Blanchard, E., 212
Boadway, R., 30
Bowden, A. B., 28
Boymal, J., 17
Bragg, S. M., 162
Brewer, P. C., 28
Briggs, A., 101
Bruce, F. C. M., 32, 111
Buchanan, H., 63
Buhl, S., 115
Bullement, A., 101
Buros Mental Measurement Yearbook, 214
Campbell, D. T., 215
Carey, R. G., 40, 99



Carpenter, R. A., 87
Carter Center, 62
Chawla, K., 72
Chelimsky, E., 99
Christie, C. A., 16, 99
Claxton, K., 81
Coates, T., 179
Cohen, J., 217
Cohen, M., 87
Collins, L. M., 215
Commonwealth of Australia, 15
Cook, T. D., 215
Corporate Finance Institute, 141
Crosson, S. V., 128
Dagher, R., 63
Datar, S. M., 152, 162
Davidson, J. E., 51, 99
Defoy, S. T., 15
DeMuth, N. M., 179
Department of the Prime Minister and Cabinet, Government of Australia, 72
Derse, A., 64
Devine, P. W., 4
Dijkstra, K. A., 33
Dixon, J. A., 34, 87
Doshi, J. A., 82
Drummond, M. F., 81, 83, 85, 102
Eddleston, M., 99
Ellis, P. D., 41
Encyclopedia Britannica Ready Reference, 5
Escueta, M., 81
European Commission, 15, 72

Faulin, J., 87
Fisher, W. H., 190
Fitzpatrick, J. L., 63
Fleischer, D. N., 16, 99
Flyvbjerg, B., 115, 116
Fowles, T. R., 42
Freed, M. C., 82
Freeman, E. H., 34
French, A. N., 42, 103
Fuguitt, D., 87
Garrison, R. H., 28, 124, 131, 132, 143, 151, 159, 162, 166, 170
George, F., 162
Gilbody, S., 109
Gilpin, A., 72, 87, 88
Glick, H. A., 82, 85
Gone, J. P., 235
Gorman, J. A., 190, 216, 217
Gramlich, E. M., 71
Grasman, S., 87
Hacsi, T. A., 186
Hansen, K. S., 99
Hatry, H. P., 21, 60
Hatswell, A. J., 101, 113
Hatz, L. E., 33
Herman, P. M., 9
Heubner, T. A., 186
Hirsch, P., 87
H.M. Treasury, 15, 72, 113, 115
Hollands, F., 81
Holm, M. S., 115
Hong, Y. Y., 33
Horngren, T. C., 162
House, E. R., 63
Hubert, M., 83
Independent Evaluation Group, 112
Internal Revenue Service, 124
Ittner, C., 162
Jacobson, N. S., 42
Jarmolowicz, D. P., 33
Johnsen, M., 216
Juan, A., 87
Kacmarek, C. N., 108
Kant, I., 63
Kazdin, A. E., 195, 215
Kee, J. E., 28, 33, 34, 110
Kieso, D., 123
Kieso, D. E., 33
Kiluk, B. D., 109

Kimmel, P. D., 33, 123
Kind, J., 31
King, J., 18
Kinney, M. R., 143
Kissel-VanVoolen, A., 213, 218, 220
Konradsen, F., 99
Kraemer, H. C., 42
Kupfer, D. J., 42
Layde, P. M., 64
Levin, H. M., 6, 28, 80, 81
Linfield, K., 12, 27, 28, 31, 32, 33, 64
Lipsey, W. M., 34
Litjens, P., 30
Lockwood-Dillard, Dorothy, 193, 221
Logan, T., 63
Lothian, N., 31
MacDonald, W., 63
Madsen, L. B., 99
Marra, M., 9
Martens, M. P., 186
Masotti-Johnson, A. P., 64
Mathematica Policy Research, 13
McEwan, P. J., 28, 81
McKay, C. E., 190, 216
McKinney, J. B., 5, 28, 123, 124, 128, 134, 155, 160
McMillan, D., 109
Mertens, D. M., 17, 190
Mitchell, O., 215
Mohr, B. L., 62
Morris, M., 63
Mueller, E. T., 33
Mukhtar, I. S., 161
Murphy, J. G., 186
Murphy, S. A., 215
Musa, M. H., 161
Nas, T. F., 112
National Archives, 72, 97
Needles, B. E., 128
New Zealand Treasury, 6, 15, 33, 34, 50, 62, 76, 86, 111, 112, 114
Newcomer, K. E., 21, 60
Nich, C., 108
Noreen, E. W., 28
Office of Management and Budget, 7, 8, 76, 114
Olivola, C. Y., 33
Organization for Economic Co-operation and Development, 4
Oxford Population Health, 85

Padgett, D., 63
Pallant, J., 41
Palmer, S., 31, 32
Pan, Y., 81
Pan American Health Organization, 73
Paulden, M., 101
Peräkylä, A., 195
Persaud, N., 3, 4, 5, 11, 13, 14, 20, 21, 27, 28, 30, 31, 32, 33, 50, 51, 54, 55, 56, 62, 63, 71, 72, 73, 76, 80, 81, 99, 102, 110, 112, 113, 114, 115, 123, 124, 125, 140, 141, 149, 150, 151, 154, 155, 156, 157, 158, 159, 162, 165, 166
Persson, E., 31
Peterson, H., 18
Petrosino, A., 186
PhenX Toolkit, 215
Pinkerton, S. D., 64
Polsky, D., 82
Posavac, E. J., 12, 27, 28, 31, 32, 33, 40, 64, 99, 179, 195
Powers, M., 128
Prather-Kinsey, J., 143
Raftery, J., 31, 32
Raiborn, C. A., 143
Rajan, M., 162
Rajan, M. V., 152
Richards, D., 109
Ritter, B., 212
Robbert, T., 33
Rogers, P. J., 17, 18, 186, 190
Rohan, K. J., 82
Rossi, H. P., 34, 35, 99, 112
Roth, S., 33
Royce, D., 63
Rudy, D., 21
Sanders, J. R., 63
Sassone, P. G., 72
Sava, F. A., 80, 82
Sawilowsky, S., 41
Schaffer, W. A., 72
Schemp, C. S., 9
Scriven, M., 6, 10, 13, 16, 35, 51, 54, 61, 63, 66
Sculpher, M. J., 81
Scura, L. F., 87
Shaban, S. O., 158
Shadish, W. R., 215
Shand, R., 28
Sheng, Y. M., 156
Sherman, P. B., 87
Shinkfield, A. J., 76

Siegert, F. A., 83
Sirois, L. P., 33
Small, J., 31
Smith, A. E., 186
Smith, R. D., 65
Sofis, M. J., 33
Sonnad, S. S., 82
Srinivasan, C. A., 4
Stake, R. E., 99
Stevens, K., 17
Stevenson, M. D., 101
Stice, E. K., 130
Stice, J. D., 130
Stoddart, G. L., 81
Straus, L., 33
Strecher, V., 215
Stufflebeam, D. L., 76
Sulaimon, B. A., 161
Sullivan, S. D., 87
Swain, M. R., 130
Tan, J. P., 34
Taxman, F. S., 211
ter Meulen, R., 64
Thyer, B., 63
Tinghög, G., 31
Torrance, G., 81
Treasury Board of Canada Secretariat, 15, 31, 72
TreeAge Pro, 85
Trenouth, L., 85
Truax, P., 42
United Nations Evaluation Group, 15
United Nations Sustainable Development Group, 72
University of Melbourne Latin American Herald Tribune, 6
U.S. Department of Health and Human Services, Children's Bureau, 15
U.S. Government Accountability Office, 14, 15
van den Burg, M., 64
Vandervieren, E., 83
Venkatachalam, L., 88
Vogt, W. P., 41, 245
Wagner, T. H., 33
Walsh, M. E., 9
Weiss, C. H., 99
Weygandt, J. J., 33, 123, 124, 128, 130, 131, 132, 140, 141, 143
White, J. L., 110

Wholey, J. S., 21, 60
Wilcox, S. J., 87
Williams, A., 64
Wisconsin Department of Children and Families, 7
World Bank, 73
Worthen, B. R., 63

Yates, B. T., 9, 13, 15, 26, 37, 39, 42, 64, 65, 82, 83, 99, 106, 108, 179, 182, 186, 190, 193, 195, 202, 211, 212, 213, 216, 218, 220, 221, 224, 226, 228, 231, 237
Zaman, M. S., 4
Zerbe, R., 72

Subject Index

Note. f, n, or t following a page number indicates a figure, note, or a table. Page references in bold indicate glossary entries.

Accounts payable, 129
Accountability
  current use of cost-inclusive evaluations and, 14–15
  guidelines on cost analysis and, 15–16, 15t, 24
  importance of making costs explicit, 13
  overview, 21–22
Accounting records. See also Financial accounting
  data collection and, 48–49
  double counting and, 49–50
  limitations of, 62–63
  overview, 126–131, 128t, 129t, 130t, 145–146
  selecting a cost-analytical methodology, 101
Accounts receivable, 138
Accuracy, 115–116
Activities. See also Activity × process matrix; Operating activities; Program operations; Resource × activity matrix; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
  dealing with unmeasured or unallocated resources and, 205, 206t
  definition, 243
  evaluating activity → process relationships and, 218–219, 219t
  examining program costs in relation to, 9–11
  improving program cost-effectiveness and, 188–190, 189t
  listing and defining, 183, 184t
  overview, 179–182, 180f, 181f, 207–208, 212–213, 238
  parameters of, 185–186, 187t, 188f
  planned versus implemented, 186, 188
  reliability and validity of resource → activity findings, 195–197, 196t
  resource costing and, 197–200, 199t, 201t
  resource × activity matrix and, 182–183, 184t, 185t
  summarizing resource → activity findings, 190–195, 191t, 194t
  valuing resources consumed and, 200, 202–204, 202t, 203t, 204t
Activity driver, 155–157, 172. See also Cost driver
Activity × process matrix, 221–222, 221t, 223–225, 224t. See also Activities; Processes; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
Administrative expenses, 49–50
Advice/tips for conducting evaluations, 235–237
Allocation method
  dealing with unallocated resources, 205, 206t
  direct versus indirect costs and, 29
Apportionment method
  direct versus indirect costs and, 29–30
  double counting and, 50
Assets. See also Balance sheet; Capital assets
  general journal and, 128–129
  overview, 134–140, 135f, 136t, 137t, 138f
  tangible versus intangible costs and outcomes and, 34
Assumptions
  cost-volume-profit analysis and, 162, 165
  issues to consider in cost-inclusive evaluation and, 66
  sensitivity analyses and, 112–114
  use of in cost appraisals, 114–115
Availability of data, 101

Balance sheet, 130, 130f, 134–140, 135f, 136t, 137t, 138f, 146, 243. See also Assets; Capital assets; Equity; Financial statements; Liabilities
Benefit/cost ratio (BCR), 41, 76–77, 77f, 95, 232–234, 233f. See also Cost-benefit analysis
Benefits. See also Cost-benefit analysis; Monetary benefits; Nonmonetary outcomes; Outcome
  cost-benefit analysis and, 77f
  cost-benefit and benefit/cost ratios and, 232–234, 233f
  differences in costs and outcomes and, 40–43
  distinguishing effectiveness from, 43–44
  ethics and, 63–65
  excluding some costs or benefits, 104–107
  interpretations of findings and, 107–109, 108f
  market prices versus shadow prices and, 111–112
  outcomes identification model and, 56–57, 58f, 59t, 60t
  relevant cost analysis and, 165–171, 167t, 168f, 169t, 170t, 171t
  selecting a cost-analytical methodology, 100–102
  traditional economic frameworks, 72–73
Biological processes, 214–215. See also Processes
Biopsychosocial processes, 212–215, 237–238. See also Processes
Box-and-whisker diagram, 83, 83f, 243
Break-even point
  cost-volume-profit analysis and, 162, 163, 164f
  definition, 243
  overview, 158–161, 159f, 160f, 161f, 162f, 172, 176
Budget for evaluation. See Evaluation costs
Budgets. See also Cash budget
  cost behavior and, 151
  data collection and, 48–49
  different interests between stakeholders and, 104
  issues to consider in cost-inclusive evaluation and, 65
  limitations of, 62–63
  perspectives for the study and, 103
  selecting a cost-analytical methodology, 101–102, 116–117
Capital assets. See also Assets; Balance sheet; Equity
  general journal and, 129
  outcomes identification model and, 57
  overview, 30, 134–135, 136t
Capital costs, 30, 44, 243. See also Cost

Capital investments, 140. See also Equity
Cash and cash equivalents, 138
Cash budget, 143, 144t, 145, 243. See also Budgets
Cash disbursements, 143, 144t
Cash flow, 113. See also Statement of cash flows
Cash receipts, 143, 144t
Change in net assets ratio, 142t
Changes in operations, 163, 165
Choices
  decisions facilitated with cost information and, 12t
  discount rate choices and, 109–110, 111f
  selecting a cost-analytical methodology, 100–102
Classification system for costs and outcomes. See also Cost; Outcome
  cost identification and, 51, 52f, 53–54, 53t
  double counting and, 50
  overview, 26–27, 44–45, 47
Client, 6n, 11, 19t, 21t, 244. See also Stakeholders
Committed fixed costs, 158. See also Fixed costs
Comparisons
  importance of making costs explicit, 10, 13
  ratio analysis and, 141f
Competency
  overview, 21–22
  resistance to including costs in evaluations, 19t
  training programs, 25
Compound entry, 128, 128t
Conceptual cost model, 51. See also Cost identification model
Confidence interval (CI), 84, 84f, 108–109, 231, 244
Consumed resources, 200, 202–204, 202t, 203t, 204t. See also Resources
Contingent valuation, 87–88, 91
Continuation of program decisions, 12–13, 12t
Contract proposals, 17–18
Contribution format income statement, 154–155, 155t, 159, 160f, 161f
Contribution margin, 176
Controversial valuations, 66–67. See also Valuation
Cost. See also Cost identification model; Cost-benefit analysis; Monetary costs; Nonmonetary program costs; Opportunity costs; Program costs
  asking costs to whom, 6–7
  budgets and accounting records and, 62–63
  capital versus recurrent costs, 30
  cost-benefit analysis and, 76–77, 77f
  cost-benefit and benefit/cost ratios and, 232–234, 233f

  cost/effectiveness ratios and effectiveness/cost ratios and, 229–232, 232f
  differences in costs and outcomes, 40–43
  direct versus indirect costs, 28–30
  distinguishing effectiveness from benefits and, 43–44
  evolution and development of cost analysis, 71–72, 97
  excluding some costs or benefits, 104–107
  financing costs, 39
  fixed versus variable costs, 27–28
  hidden costs, 40
  insurance, 39–40
  interpretations of findings and, 107–109, 108f
  market prices versus shadow prices and, 111–112
  micro-, meso-, and macro-level cost assessments and, 60–62
  over- and underestimation and, 115–116
  overview, 5–6, 26–27, 44–45
  psychological costs, 37–40, 39t
  quantitative versus qualitative costs and outcomes, 36–37, 38f
  relevant cost analysis and, 165–171, 167t, 168f, 169t, 170t, 171t
  research costs, 40
  selecting a cost-analytical methodology, 100–102
  sensitivity analyses and, 112–113
  sunk costs, 32–33
  tangible versus intangible costs and outcomes, 34
  traditional economic frameworks, 72–73
Cost and management accounting. See also Cost behavior
  break-even analysis and, 158–161, 159f, 160f, 161f, 162f
  cost or activity drivers and, 155–157
  cost structure and, 157–158
  cost-volume-profit analysis and, 161–165, 162f, 164t
  formulas for, 176
  issues to consider in cost-inclusive evaluation and, 66
  overview, 150–151, 172–173
  relevant cost analysis and, 165–171, 167t, 168f, 169t, 170t, 171t
  relevant range and, 152–155, 153f, 154f, 155t, 156t
Cost behavior. See also Cost and management accounting
  definition, 244
  overview, 151–152, 152t, 153f, 172
  relevant range and, 152–155, 153f, 154f, 155t, 156t
Cost data, 4–5, 48–49. See also Data collection

Cost driver
  definition, 244
  overview, 155–157, 172
  relevant range and, 155, 156t
  sensitivity analyses and, 113
Cost identification model, 51–56, 52f, 53t, 54t, 55t, 67. See also Data collection
Cost object, 28–29
Cost per clinically significant change, 41–42
Cost per cure, 41–42
Cost per outcome, 228–229
Cost per participant, 107–108, 108f
Cost structure, 157–158, 172, 244
Cost-benefit analysis. See also Benefits; Cost
  assumptions and, 114–115
  cost → outcome relationship and, 229
  definition, 244
  differences in costs and outcomes, 40–43
  evolution and development of cost analysis, 71–72, 97
  overview, 11, 76–77, 77f, 88, 89t, 95
Cost-benefit ratios, 232–234, 233f
Cost-effectiveness acceptability curves (CEACs), 85
Cost-effectiveness analysis. See also Effectiveness
  cost → outcome relationship and, 227–229
  cost identification model and, 54t
  cost-benefit and benefit/cost ratios and, 232–234, 233f
  cost/effectiveness ratios and effectiveness/cost ratios and, 229–232, 232f
  definition, 244
  differences in costs and outcomes, 40–43
  distinguishing effectiveness from benefits and, 44
  monetary versus nonmonetary costs and outcomes and, 36
  overview, 80, 81f, 90t, 91, 96
  resource × activity matrix and, 188–190, 189t
Cost/effectiveness ratios (CERs), 40–41, 229–232, 232f
Cost-feasibility analysis, 81, 90t, 91, 244
Cost-inclusive decision making, 4–5. See also Decision making
Cost-inclusive evaluation, 3, 234–238, 244
Cost-per-participant (CPPA) approach, 80, 81f. See also Cost-effectiveness analysis
Costs of evaluations. See Evaluation costs
Costs to whom, 51, 52f, 67. See also Cost
Costs when, 51, 52f, 67. See also Cost
Cost-utility analysis, 81–85, 82t, 83f, 84f, 90t, 91, 96, 244
Cost-volume-profit analysis
  break-even analysis and, 159
  definition, 244
  overview, 161–165, 162f, 164t, 172

COVID-19
  cost structure and, 158
  decision making and, 4–5, 150
  differential analysis and, 165–171, 167t, 168f, 169t, 170t, 171t
  overview, 22
Cultural concerns, 67
Current assets, 135f, 136t, 137t, 138–139, 138f. See also Assets
Current liabilities, 135f, 136t, 137t, 138f, 139. See also Liabilities
Current ratio, 142t
Data analysis, 63–66
Data collection. See also Cost data; Cost identification model; Outcomes identification model
  budgets and accounting records and, 62–63
  cost analyses and, 17
  double counting and, 49–50
  ethics and, 63–65
  issues to consider, 65–67
  micro-, meso-, and macro-level cost assessments and, 60–62
  overview, 48–49, 67
  selecting a cost-analytical methodology, 101
Data quality and availability, 101
Decision making
  break-even analysis and, 160
  cost → outcome relationship and, 227–229
  cost and management accounting and, 150–151
  cost-volume-profit analysis and, 162
  current use of cost-inclusive evaluations and, 14–16, 15t
  different interests between stakeholders and, 103
  differential analysis and, 165–171, 167t, 168f, 169t, 170t, 171t
  evolution and development of cost analysis and, 72, 97
  financial accounting records and, 126–127
  importance of making costs explicit, 11–14, 12f
  overview, 4–5, 21–22
  ratio analysis and, 140–141, 141f
  resistance to including costs in evaluations, 18–20, 19t
  traditional economic frameworks, 73
Depreciation, 30, 139
Development of cost analysis, 71–72
Differential analysis. See also Relevant cost analysis
  keeping or dropping a service, 166–167, 167t, 168f
  make-or-buy decision, 167–169, 169t

  overview, 165–171, 167t, 168f, 169t, 170t, 171t, 173
  sell or process further decisions and, 171
  special order decisions, 169–170, 170f
  utilization of a constrained resource decision and, 170–171, 171t
Direct costs, 28–30, 38f, 44, 245. See also Cost
Direct downstream impactees, 52f
Direct services
  definition, 245
  improving program cost-effectiveness and, 189–190, 189t
  overview, 198, 200, 201t
  valuing resources consumed and, 203–204, 204t
Disability-adjusted life years (DALY), 85, 90t, 91
Discount rate
  assumptions and, 114–115
  impact of on analyses, 109–110, 111f, 117
  internal rate of return (IRR) and, 77–79, 78f
  issues to consider in cost-inclusive evaluation and, 66
  overview, 73–75
  sensitivity analyses and, 112–113
Discounted payback period (DPP), 79, 79f, 89t, 91, 95, 245
Discounting, 73–75, 74f, 98
Discretionary fixed costs, 158. See also Fixed costs
Donated goods and services, 35, 45. See also Volunteered time
Double counting, 49–50, 51, 67
Double-entry form of accounting, 128, 128t, 129t. See also Accounting records
Dropping a service decision, 166–167, 167t
Economic appraisal methods
  advantages and disadvantages, 88, 89t–90t
  cost-benefit analysis, 76–77, 77f, 88, 89t
  cost-effectiveness analysis, 80, 81f, 90t, 91
  cost-feasibility analysis, 81, 90t, 91
  cost-utility analysis, 81–85, 82t, 83f, 84f, 90t, 91
  different interests between stakeholders and, 103–104
  discount rate choices and, 109–110, 111f
  evolution and development of cost analysis, 71–72, 97
  excluding some costs or benefits, 104–107
  Executive Order 12291, 72, 97
  formulas for, 95–96
  internal rate of return (IRR), 77–79, 78f, 89t, 91
  interpretations of findings and, 107–109, 108f

  issues to consider in cost-inclusive evaluation and, 66
  market prices versus shadow prices and, 111–112
  net present value (NPV), 75–76, 75f, 88, 89t
  over- and underestimation and, 115–116
  overview, 71, 88–91, 89t–90t, 99–100, 116–117, 149
  payback period and discounted payback period, 79, 79f, 89t, 91
  perspectives for the study and, 102–103
  present-value discount table, 98
  return on investment (ROI), 86, 86f, 90t, 91
  selecting a cost-analytical methodology, 100–102, 116–117
  sensitivity analyses and, 112–114
  surrogate market valuation methodologies, 86–88, 90t, 91
  time preference and discounting, 73–75, 74f, 98
  traditional economic frameworks, 72–73
Economic frameworks, 72–73
Effect size, 41, 245
Effectiveness. See also Nonmonetary outcomes; Outcome
  considering cost in evaluations and, 8–9, 10
  cost identification model and, 54–55
  cost/effectiveness ratios and effectiveness/cost ratios and, 229–232, 232f
  differences in costs and outcomes, 40–43
  distinguishing from benefits, 43–44
  interpretations of findings and, 107–109, 108f
  outcomes identification model and, 56–57, 58f, 59t, 60t
  overview, 35, 36
  psychological costs and, 37–39, 39t
  quantitative versus qualitative costs and outcomes, 36–37
Effectiveness/cost ratios (ECRs), 229–232, 232f
Efficiency, 8–9, 141f. See also Ratio analysis
Efficiency ratios, 141, 142f, 143t
Equity, 128, 134–140, 135f, 136t, 138f. See also Balance sheet; Capital assets
Estimating total costs for new activity levels, 176
Ethics, 63–65, 67
Evaluand, 3, 73, 245
Evaluation costs
  different interests between stakeholders and, 104
  issues to consider in cost-inclusive evaluation and, 66
  overview, 48–49
  perspectives for the study and, 103
  resistance to including costs in evaluations, 19t

  selecting a cost-analytical methodology, 101–102, 116–117
  strategies to conduct a cost-inclusive evaluation and, 20, 21t
Evaluation questions, 103
Evolution of cost analysis, 71–72
Exchange rate, 113
Exclusion of certain costs or benefits, 104–107
Executive Order 12291, 72, 97
Ex-offender, 223, 245
Expansion of program decisions, 12, 12t
Expenditures
  cash budget and, 143, 144t
  cost-volume-profit analysis and, 161–165, 162f, 164t
  general ledger and, 128–129
External pressures, 105
Factorial design, 215, 245
Financial accounting, 123–125, 126–127, 145–146. See also Accounting records
Financial statements. See also Accounting records; Balance sheet; Income statement; Statement of cash flows
  definition, 245
  notes to financial statements, 140, 146
  overview, 124–125, 130–131, 130f, 145–146
Financing activities, 132, 133t, 143, 144t. See also Statement of cash flows
Fiscal accountability, 7–9
Fixed assets, 135f, 136t, 137t, 138, 138f, 139. See also Assets
Fixed costs. See also Cost; Program costs
  cost behavior and, 151, 152t
  cost structure and, 157–158
  definition, 245
  overview, 27–28, 44
  relevant range and, 153f
Formative evaluation, 211–212, 227, 245
Frequency of activities, 185–186, 187t, 188f. See also Activities
Funding decisions, 11, 12t
Fundraising efficiency ratio, 142t
General journal, 128, 128t, 145, 245. See also Accounting records
General ledger, 128–129, 129t, 145, 246
Generally accepted accounting principles (GAAP), 124. See also Financial statements
Goods, donated. See Donated goods and services
Grant proposals, 17–18
Graphs, 231–232, 232f
Gross income, 131, 131t. See also Income statement

Hedonic pricing, 86–87, 91
History of cost analysis, 71–72, 97
Iatrogenic, 192, 213, 246
Identifying costs. See Cost identification model
Impact, 3, 35, 246. See also Outcome
Implementation
  cost identification model and, 52f
  planned versus implemented activities, 186, 188
  sensitivity analyses and, 113
Improvement, 12–13, 12t
Income, 143
Income statement, 130f, 131, 131t, 145, 246. See also Financial statements
Inconsistencies, 63
Indirect costs. See also Cost
  definition, 246
  overview, 28–30, 38f, 44
  valuing resources consumed and, 203–204, 204t
Indirect downstream impactees, 52f
Indirect services
  definition, 246
  overview, 198–200, 199t, 201t
  valuing resources consumed and, 203–204, 204t
Inflation, 67
Inputs, 237
Intangible benefits, 34, 36, 246
Intangible costs. See also Cost
  budgets and accounting records and, 62–63
  definition, 246
  overview, 34, 44
Intensity of activities, 185–186, 187t, 188f. See also Activities
Internal rate of return (IRR)
  definition, 246
  interpretations of findings and, 109
  overview, 77–79, 78f, 89t, 91, 95
  sensitivity analyses and, 113
Interpretation
  excluding some costs or benefits and, 106
  problems with, 107–109, 108f
Inventory, 139
Investing activities, 132, 133t. See also Statement of cash flows
Investment decisions, 11, 12t
Irrelevant costs and benefits, 165–166
Issues to consider in cost-inclusive evaluation, 65–67
Journalizing, 128, 128t
Keeping a service decision, 166–167, 167t, 168f

Learning, 12t, 13
Leverage ratios, 141, 142f, 143t. See also Ratio analysis
Liabilities, 128–129, 134–140, 135f, 136t, 137t, 138f. See also Balance sheet
Liquidity ratios, 141, 142f, 143t. See also Ratio analysis
Literature review, 104
Logic model, 179–182, 180f, 181f, 246. See also Resources → Activities → Processes → Outcomes Analysis (RAPOA)
Long-term liabilities, 135f, 136t, 137t, 138f, 139. See also Liabilities
Loss, 158–159, 159f
Macro-level program operation and evaluation, 60–62, 67
Make-or-buy decision, 167–169, 169t
Management, 19t. See also Cost and management accounting
Market prices, 111–112
Market value ratios, 141, 142f, 143t. See also Ratio analysis
Measurement, 19t
Meso-level program operation and evaluation, 60–62, 67
Methodologies, 65–67, 100–102, 116–117
Micro-level program operation and evaluation, 60–62, 67. See also Participants
Midstream impactees, 52f
Minimum cost approach, 80. See also Cost-effectiveness analysis
Missing data, 63
Mixed methods
  definition, 247
  formative findings and, 227
  overview, 17–18
  resources → activities → processes → outcomes analysis (RAPOA) and, 221–222, 221t
  summarizing resource → activity findings, 190–195, 191t, 194t
Monetary benefits, 34–36, 247. See also Outcome
Monetary costs. See also Cost; Program costs
  budgets and accounting records and, 62–63
  cost identification model and, 52f, 53–54, 53t
  definition, 247
  importance of making costs explicit, 11–14, 12f
  overview, 34–36, 38f
  selecting a cost-analytical methodology, 100–101
Monetary outcomes. See also Outcome
  differences in costs and outcomes and, 43
  distinguishing effectiveness from benefits and, 43–44
  outcomes identification model and, 56, 58f, 59t, 60t

Net assets, 134–135, 137t, 140. See also Assets; Balance sheet; Equity
Net income, 131, 131t. See also Income statement
Net present value (NPV)
  cost-benefit analysis and, 77
  definition, 247
  discount rate choices and, 110, 111f
  internal rate of return (IRR) and, 77, 79
  overview, 75–76, 75f, 88, 89t, 95
Net working capital ratio, 142t
Nominal interest rate, 114n. See also Discount rate
Noncash gifts. See Donated goods and services
Non-current assets, 135f, 136t, 137t, 138, 138f, 139. See also Assets
Non-current liabilities, 135f, 136t, 137t, 138f, 139. See also Liabilities
Nonmonetary outcomes, 34–36, 56, 58f, 59t, 60t. See also Effectiveness; Outcome
Nonmonetary program costs. See also Cost; Opportunity costs; Program costs
  budgets and accounting records and, 62–63
  cost identification model and, 52f, 53–54, 53t, 55, 67
  outcomes identification model and, 67
  overview, 13–14, 34–36, 38f
Nontangible benefits. See Intangible benefits
Nontangible costs. See Intangible costs
Notes to financial statements, 140, 146. See also Financial statements
Number needed to treat (NNT), 42
Objective techniques, 73
Occurrence of activities, 185–186, 187t, 188f. See also Activities
Omissions, 63, 67
Operating activities, 28, 132, 133t, 179–182, 180f, 181f. See also Activities; Program operations; Statement of cash flows
Operating efficiency, 141f. See also Efficiency
Operating margin ratio, 142t
Operating reliance ratio, 142t
Operational costs. See also Recurrent costs
  cost identification model and, 52f
  relevant range and, 152, 154
Opportunity costs. See also Cost; Nonmonetary program costs
  budgets and accounting records and, 62
  cost identification model and, 53–54
  definition, 247
  overview, 6, 31–32, 44
  sunk costs and, 33
  volunteered time and, 13–14
Optimization, 155, 156t

Outcome. See also Effectiveness; Monetary outcomes; Process × outcome matrix; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
  activities and, 180
  budgets and accounting records and, 62–63
  cost identification model and, 54–55
  cost-benefit analysis and, 76–77, 77f
  cost-benefit and benefit/cost ratios and, 232–234, 233f
  cost/effectiveness ratios and effectiveness/cost ratios and, 229–232, 232f
  definition, 247
  differences in costs and outcomes, 40–43
  distinguishing effectiveness from benefits and, 43–44
  ethics and, 63–65
  evaluating process → outcome relationships and, 220–221, 220t
  interpretations of findings and, 107–109, 108f
  methods for finding processes that foster outcomes and, 215–218, 217f
  micro-, meso-, and macro-level cost assessments and, 60–62
  monetary versus nonmonetary outcomes, 34–36
  over- and underestimation and, 115–116
  overview, 3, 44–45, 212–213, 237–238
  psychological costs and, 37–39, 39t
  quantitative versus qualitative costs and outcomes, 36–37, 38f
  resource × activity matrix and, 183
  sensitivity analyses and, 112–113
  tangible versus intangible costs and outcomes, 34
Outcomes identification model, 56–57, 58f, 59t, 60t, 67
Outputs
  definition, 247
  examining program costs in relation to, 9–11
  overview, 3, 237
Overestimation, 67, 115–116
Owners capital, 135f, 138f. See also Capital investments
Parameters of activities, 185–186, 187t, 188f. See also Activities
Participants. See also Micro-level program operation and evaluation; Stakeholders
  activities and, 179
  break-even analysis and, 159
  cost-per-participant (CPPA) approach and, 80, 81f
  cost-volume-profit analysis and, 163
  definition, 248
  interpretations of findings and, 107–109, 108f

Participants (cont.)
  overview, 6–7
  resistance to including costs in evaluations, 18–20, 19t
  resource × activity matrix and, 182–183, 184t, 185t
Payback period (PP), 79, 79f, 89t, 91, 95, 248
Performance, 140
Perspectives of stakeholders. See also Stakeholders
  cost identification model and, 54–55
  excluding some costs or benefits and, 106
  selecting and planning an evaluation and, 102–103, 116–117
Planning
  perspective for the study and, 102–103
  planned versus implemented activities, 186, 188
  selecting a cost-analytical methodology, 100–102
  strategic planning, 143
Policymakers, 14–15, 24
Political agenda, 104–105
Prepaid expenses, 139
Preparation, 52f
Present-value discount table, 98
Prime rates, 109–110. See also Discount rate
Probabilistic sensitivity analysis (PSA), 101, 113. See also Sensitivity analysis
Process × outcome matrix, 221–222, 221t, 225–227, 226t. See also Outcome; Processes; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
Processes. See also Activity × process matrix; Process × outcome matrix; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
  definition, 248
  evaluating activity → process relationships and, 218–219, 219t
  evaluating process → outcome relationships and, 220–221, 220t
  examining program costs in relation to, 9–11
  methods for finding processes that foster outcomes and, 215–218, 217f
  methods of measuring, 214–215
  overview, 180, 205, 207–208, 212–214, 237–238
  resource × activity matrix and, 183, 184t, 185t
Profit and profitability. See also Revenue
  break-even point and, 158–159, 159f
  cost-volume-profit analysis and, 161–165, 162f, 164t
  overview, 5–6, 149
Profitability ratios, 141, 142f, 143t. See also Ratio analysis

Program costs. See also Cost; Monetary costs; Nonmonetary program costs
  considering in evaluations, 5–9
  cost/effectiveness ratios and effectiveness/cost ratios and, 229–232, 232f
  data collection and, 48–49
  double counting and, 49–50
  importance of making costs explicit, 11–14, 12f
  overview, 3, 21–22
  resistance to including in evaluations, 18–20, 19t
Program evaluation, 3, 20, 21t
Program managers, 4
Program operations, 179–182, 180f, 181f, 186, 188. See also Activities; Operating activities
Program theory, 186
Proposals for funding, 183
Psychological costs, 37–39, 39t. See also Cost
Psychological processes, 214–215. See also Processes
Qualitative costs and outcomes. See also Cost; Outcome; Qualitative data and methods
  cost identification model and, 52f, 67
  outcomes identification model and, 58f, 59t, 60t, 67
  overview, 36–37, 38f, 44–45
Qualitative data and methods. See also Qualitative costs and outcomes
  definition, 17, 248
  evaluating activity → process relationships and, 218–219, 219t
  issues to consider in cost-inclusive evaluation and, 66
  methods for finding processes that foster outcomes and, 215–217, 217f
  overview, 17–18
  summarizing resource → activity findings, 190–195, 191t, 194t
Quality of data, 101
Quality-adjusted life years gained (QALYG)
  definition, 248
  overview, 82–85, 83f, 84f, 90t, 91
  selecting a cost-analytical methodology, 101
Quality-adjusted life years (QALY), 81–85, 83f, 84f, 90t, 91, 211, 248
Quantitative costs and outcomes. See also Cost; Outcome; Quantitative data and methods
  cost identification model and, 52f, 53t, 55, 67
  outcomes identification model and, 56–57, 58f, 59t, 60t, 67
  overview, 36–37, 38f

Quantitative data and methods. See also Quantitative costs and outcomes
  definition, 17, 248
  issues to consider in cost-inclusive evaluation and, 66
  methods for finding processes that foster outcomes and, 217–218
  overview, 17–18
  summarizing resource → activity findings, 190–195, 191t, 194t
RAPOA models. See Resources → Activities → Processes → Outcomes Analysis (RAPOA)
Ratio analysis, 140–141, 141f, 142f, 142t–143t, 146, 248
Real interest rate, 114n. See also Discount rate
Records, accounting. See Accounting records
Recurrent costs, 30, 44, 248. See also Cost; Operational costs
Refinement cycle, 186, 188f
Regression analysis, 87
Relevant cost analysis, 165–171, 167t, 168f, 169t, 170t, 171t, 173. See also Differential analysis
Relevant range, 152–155, 153f, 154f, 155t, 156t, 172
Reliability
  contingent valuation method and, 88
  definition, 249
  of resource → activity findings, 195–197, 196t
Replication of program decisions
  decisions facilitated with cost information and, 12t
  importance of making costs explicit, 12–13
  sunk costs and, 33
Reporting guidelines and regulations
  activities of the program and, 183
  considering cost in evaluations and, 7–9
  ethics and, 63–65
  issues to consider, 65–67
Resistance to analyzing and including program costs in an evaluation
  data collection and, 49
  ethics and, 64
  overview, 18–20, 19t, 21–22
Resource costing, 197–200, 199t, 201t
Resource × activity matrix. See also Resources → Activities → Processes → Outcomes Analysis (RAPOA)
  improving program cost-effectiveness and, 188–190, 189t
  listing and defining activities and, 183, 184f
  listing and defining resources and, 184–185, 185t
  mixed methods and, 221–222, 221t

  overview, 182–183, 184t, 185t, 208, 222, 223t
  reliability and validity of resource → activity findings, 195–197, 196t
  resource costing and, 197–200, 199t, 201t
  summarizing resource → activity findings, 190–195, 191t, 194t
Resources. See also Resource × activity matrix; Resources → Activities → Processes → Outcomes Analysis (RAPOA)
  dealing with unmeasured or unallocated resources and, 205, 206t
  definition, 249
  examining program costs in relation to, 10, 37, 38f
  improving program cost-effectiveness and, 188–190, 189t
  listing and defining, 184–185
  overview, 6, 179, 207–208, 238
  parameters of, 185–186, 187t, 188f
  reliability and validity of resource → activity findings, 195–197, 196t
  resource costing and, 197–200, 199t, 201t
  resource × activity matrix and, 182–183, 184–185, 184t, 185t
  summarizing resource → activity findings, 190–195, 191t, 194t
  valuing resources consumed, 200, 202–204, 202t, 203t, 204t
Resources → Activities → Processes → Outcomes Analysis (RAPOA). See also Activities; Activity × process matrix; Outcome; Process × outcome matrix; Processes; Resource × activity matrix; Resources
  definition, 249
  evaluating activity → process relationships and, 218–219, 219t
  evaluating process → outcome relationships and, 220–221, 220t
  formative findings and, 227
  methods for finding processes that foster outcomes and, 215–218, 217f
  mixed methods and, 221–222, 221t
  overview, 105–106, 180–182, 180f, 181f, 207–208, 212, 238
  reliability and validity of resource → activity findings, 195–197, 196t
  resource costing and, 197–200, 199t, 201t
  summarizing resource → activity findings, 190–195, 191t, 194t
  valuing resources consumed, 200, 202–204, 202t, 203t, 204t
Retained earnings, 135f, 136t, 138f. See also Equity
Return on investment (ROI), 86, 86f, 90t, 91, 249

Revealed preferences techniques, 86–88
Revenue. See also Profit and profitability
  cost-volume-profit analysis and, 161–165, 162f, 164t
  general journal and, 128–129
Salaries, 49–50
Self-efficacy expectancy, 182, 213, 249
Self-sustainability, 5
Sell or process further decisions, 171
Sensitivity analysis, 101, 112–114, 117
Services. See Direct services; Donated goods and services; Indirect services
Shadow prices, 111–112, 113, 117
Shared costs, 29, 50. See also Indirect costs
Simple entry, 128, 128t
Single-entry form of accounting, 127, 128t. See also Accounting records
Social capital costs, 53–54
Social cost-benefit analysis, 76–77, 86. See also Cost-benefit analysis
Social processes, 214–215. See also Processes
Social services, 14–15
Societal perspective, 102–103. See also Perspectives of stakeholders
Space resources, 184–185, 185t. See also Resources
Special order decisions, 169–170, 170f
Stakeholders. See also Participants
  cost identification model and, 51, 52f, 54–55
  current use of cost-inclusive evaluations and, 14–16, 15t, 24
  different interests between, 103–104
  distinguishing effectiveness from benefits and, 43
  double counting and, 50
  ethics and, 63–65
  excluding some costs or benefits, 104–107
  interpretations of findings and, 107–109, 108f
  issues to consider in cost-inclusive evaluation and, 65
  perspectives and, 54–55, 102–103, 106, 116–117
  resistance to including costs in evaluations, 18–20, 19t
  resource × activity matrix and, 182–183, 184t
  selecting a cost-analytical methodology, 100, 117
Standard deviations, 108–109, 231
Statement of activities, 131, 145. See also Income statement
Statement of cash flows, 130, 130f, 132, 133t, 146. See also Cash flow; Financial statements

Statement of financial position, 134, 146. See also Balance sheet
Statements, financial. See Financial statements
Statistical analyses and tests
  differences in costs and outcomes, 40–43, 228–229
  interpretations of findings and, 108–109
  methods for finding processes that foster outcomes and, 215–217
  overview, 238
  programs and websites for, 238
Statistical power, 41, 238, 249
Strategic planning, 143
Strategies to conduct a cost-inclusive evaluation, 20, 21t
Summative evaluation, 123, 249
Sunk costs. See also Cost
  budgets and accounting records and, 62
  definition, 249
  overview, 32–33, 44
Surplus, 161, 162f, 164f
Surrogate market valuation methodologies, 86–88, 90t, 91
T-Accounts, 129, 129t
Tangible benefits, 34, 250
Tangible costs, 34, 44, 250. See also Cost
Termination, 52f
Theory of the program, 186
Time frame
  issues to consider in cost-inclusive evaluation and, 65
  selecting a cost-analytical methodology, 101–102
  sensitivity analyses and, 113
Time preference, 73–75, 74f, 98, 109
Time value
  issues to consider in cost-inclusive evaluation and, 65–66
  traditional economic frameworks, 72–73
Total assets, 135f, 136t, 137t, 138–139, 138f. See also Assets
Total equity, 135f, 138f. See also Equity
Total liabilities, 135f, 136t, 137t, 138f, 139–140. See also Liabilities
Traditional economic frameworks, 72–73
Training in cost analysis, 19t, 21–22, 25
Transparency
  considering cost in evaluations and, 7–9
  data collection and, 49
  importance of making costs explicit, 11–14, 12f
  overview, 21–22
Travel cost approach, 87, 91

Treasury rates, 109–110. See also Discount rate
Treatment plans, 183
Trends, 141f
Trial balance, 129–130, 130t, 145, 250. See also Accounting records
Type of cost, 51, 52f, 53–54, 67. See also Classification system for costs and outcomes; Cost
Unallocated resources, 205, 206t
Uncertainty, 112–114
Underestimation, 67, 115–116
Unit costs, 202–203, 203t, 204t
Unmeasured resources, 205, 206t
Upstream impactees, 52f
Utilization of a constrained resource decision, 170–171, 171t
Validity
  budgets and accounting records and, 62
  contingent valuation method and, 88

  definition, 250
  of resource → activity findings, 195–197, 196t
Valuation
  issues to consider in cost-inclusive evaluation and, 66–67
  resistance to including costs in evaluations, 19t
  traditional economic frameworks, 72–73
Value-for-money decisions, 12t, 13, 20, 150–151
Variable costs. See also Cost; Program costs
  cost behavior and, 151, 152t
  cost structure and, 157–158
  cost-volume-profit analysis and, 163
  definition, 250
  overview, 28, 42–43, 44
  relevant range and, 153f
Volunteered time, 13–14, 44–45. See also Donated goods and services; Nonmonetary program costs; Opportunity costs
Willingness to pay, 6

About the Authors

Nadini Persaud, PhD, CPA, CGA, is a tenured Lecturer in Evaluation in the Department of Management Studies at the University of the West Indies, Cave Hill Campus, Barbados. She teaches primarily at the graduate level and served as Coordinator of the Master of Science Program in Project Management and Evaluation for 10 years. The recipient of a Fulbright scholarship, Dr. Persaud is an Advisory Council member for the Barbados Chapter of Chartered Professional Accountants Canada and a board member of Caribbean Evaluators International and the Faster Forward Fund. She also serves on several committees of the American Evaluation Association. Her research focuses on blending the tools of project management and accounting to improve the practice of professional evaluation. Dr. Persaud has published numerous articles and book chapters, as well as several books, and has presented over 20 conference papers on the importance of cost analysis in program evaluation.

Brian T. Yates, PhD, is Professor in the Department of Psychology at American University in Washington, DC. Working with colleagues in his Program Evaluation Research Lab (PERL) and other universities, Dr. Yates has applied his resources → activities → processes → outcomes analysis (RAPOA) model to measure and improve costs, cost-effectiveness, cost-benefit, and cost-utility of diverse health and human services and their accreditation processes. He has 100 publications, including many peer-reviewed articles, book chapters, a National Institute on Drug Abuse manual on cost-inclusive evaluation, and six books. At American University, Dr. Yates developed the Master's Program in General Psychology and served as Acting Chair of the Department of Psychology and as Associate Dean of Graduate Studies in the College of Arts and Sciences. He has served as board member and Treasurer of the American Evaluation Association.