9-310-031
REV: JANUARY 5, 2012
ALNOOR EBRAHIM
CATHERINE ROSS

Professor Alnoor Ebrahim and Research Associate Catherine Ross, Global Research Group, prepared this case. They are grateful to Jeffrey Walker, Entrepreneur-in-Residence at HBS, for his support. Some data in the case have been disguised or modified to protect privacy. HBS cases are developed solely as the basis for class discussion. Cases are not intended to serve as endorsements, sources of primary data, or illustrations of effective or ineffective management.
The Robin Hood Foundation

We try to maximize return on investment where the return is improvement in people’s standard of living as measured by earnings, health, and quality of life.
— David Saltzman, Executive Director

Founded by hedge fund and financial managers in the wake of the stock market crash of 1987, the Robin Hood Foundation (Robin Hood) raised money from the rich and gave it to the poor through grants to nonprofit organizations. It was created with a mission to fight poverty in New York City, an urban Sherwood Forest where, in the shade of towering stands of corporate wealth, nearly 20% of the population lived in poverty.1 In 2008, Robin Hood made over 200 grants totaling $137 million to community-based organizations in New York City, and the foundation was nearing $1 billion in cumulative grant-making. Grantees implemented programs in jobs training and economic security, early childhood and youth interventions, and education; they also provided basic and immediate aid, such as housing, food, and healthcare, to those most in need.

Two decades after Robin Hood’s founding, New York City once again became the epicenter of a global financial crisis, described by many leading economists as the worst downturn since the Great Depression. For the city’s poor, the peak of the crisis in late 2008 and early 2009 was just the beginning, portending an extended period of job losses, cuts in public services, and a statewide fiscal crisis. Despite the downturn in the hedge fund and financial services industries, Robin Hood’s annual fundraising dinner in May 2009 raised a record-breaking $73.5 million in one night, which would be used to counteract the impacts of the financial crisis on the city’s poor residents.2 Of this amount, $17.5 million came from ticket sales, $28 million from guests, and another $28 million in matching pledges from the Open Society Institute (OSI), founded by financier George Soros, and the board of the Robin Hood Foundation. Before the event, OSI and the Robin Hood board had each pledged $50 million to match the next $100 million raised from external donors. (See Exhibit 1 for a list of Robin Hood’s board members.)

Alone in his spare and modernist Manhattan office in November 2009, Michael Weinstein, senior vice president and director of programs at Robin Hood, marveled at the foundation’s ability to raise that much money in tough economic times. Donors had been moved to contribute to Robin Hood’s work; their trust and generosity offered Robin Hood the chance to make significant and lasting
differences in the lives of poor people throughout the city during particularly trying days. He had a responsibility to repay that generosity by making the best possible use of the funds.

Weinstein had joined Robin Hood in 2002 after being recruited to help deepen and improve the foundation’s ability to identify, measure, and reward outstanding performance of nonprofits in the fight against poverty. Board members were accustomed to measuring and quantifying performance in their day jobs, and they wanted equally tough measures for nonprofits—metrics that could be tracked for each grantee, and that could be compared to other investments to make sure Robin Hood was allocating its dollars most effectively. Since coming on board, Weinstein had led a process of developing a more rigorous, systematic, and metrics-driven approach to identifying and supporting organizations that showed the highest “poverty-fighting” potential. Under his guidance, Robin Hood had developed a benefit-cost (BC) approach to analyzing the performance of program grants. Calculating a BC ratio for each grant allowed the comparison of “apples to oranges” in terms of each grant’s effectiveness in lifting people out of poverty.

In 2009, with the basic BC method in place, Weinstein believed that Robin Hood was “in the second inning of a nine-inning game.” The metrics used in the BC ratios were built on a series of assumptions, some more tested than others. Robin Hood continued to seek out academic research that would help it develop deeper, more rigorous, and trustworthy measures. But aside from some exceptional longitudinal studies in early childhood interventions, little research existed that exactly met Robin Hood’s needs. Additionally, the influx of a significant amount of outside funding from OSI raised the stakes on Robin Hood’s promise of effectiveness. The foundation collected large amounts of data from individual grantees about their programs. But what were the cumulative impacts of these grants? How effective was Robin Hood’s approach? And how would the organization know whether it was succeeding in the fight against poverty in New York City?
Background

Paul Tudor Jones II, a trader and hedge fund manager, founded the Robin Hood Foundation with four associates in 1988. The previous year, Jones had predicted and nimbly sidestepped the stock market crash of 1987 with a prescience that netted him a gain of over 200%3 that year and made him famous in trading circles. The Robin Hood Foundation was born from Jones’s concern about the impact the economic downturn could have on poor residents of New York City. Jones was not a newcomer to social causes; in the 1980s, he had offered to pay the college tuition and fees of every member of a then-sixth-grade Bedford Stuyvesant class who stayed in school, graduated, and got into college. The challenges and setbacks encountered by the students convinced Jones that more sustained commitment, rigorous testing, and rewards for outcomes would be necessary to make real inroads in fighting poverty and neglect.

From the outset, Robin Hood relied on large private donations from individuals, many of them Wall Street financial managers. The organization set its sights exclusively on serving New York City’s poor populations, and soon began developing expertise in issues such as education, health, housing, and youth services. David Saltzman, one of the five founders of Robin Hood and a native of Manhattan, was named executive director in 1989. He held a master’s degree in public health from Columbia University and had worked for a number of years in public education and with homeless families, as well as in AIDS education programs for the city’s department of public health and board of education. Under his leadership, the foundation provided financial grants to nonprofit organizations working with poor populations and leveraged its financial support with management, legal, accounting, and real estate assistance.
Board and staff members considered four elements essential to Robin Hood’s approach. First, funding provided by board members covered all of the foundation’s salaries and administrative costs, which meant that Robin Hood could guarantee that 100% of every donation went directly to grantees’ work fighting poverty. Second, Robin Hood sought to address poverty early in its cycle, and to that end maintained four program portfolios: Early Childhood and Youth, Education, Jobs and Economic Security, and Survival, which collectively supported programs in healthcare, hunger, housing, and domestic violence. Third, it added value to its program work by providing “management assistance,” which involved helping grantees improve their strategic planning, fundraising, marketing, recruiting of board members or executive-level staff, and other capabilities. Finally, the foundation focused on results, comparing performance against targets and carrying out exhaustive evaluations and analysis of each grantee’s impact on poverty. An outside firm, Philliber Research Associates, was hired to audit and collect data from grantees.

After the attacks on the World Trade Center towers on September 11, 2001, Robin Hood took a leading role in raising money and administering its own 9/11 Fund. As Ground Zero still smoldered, Robin Hood organized the “Concert for New York,” an event that tapped the desire of New Yorkers and others around the world to unite and respond. The event filled Madison Square Garden and was broadcast internationally on MTV, raising $30 million for those directly affected by the tragic events in New York City.

By the turn of the millennium, Robin Hood had become legendary at raising money. Its annual fundraising dinner was considered in certain circles to be the social event of the year. One guest said, “If you are on Wall Street, particularly in hedge funds, you have to be here.”4 Tickets to the fundraiser sold for thousands of dollars a plate. An “auction” of donated goods and services generally netted millions of dollars in donations as guests seated at tables in an event room waved colored glow sticks to raise the amount of a bid offer. In the mid-2000s, bidders at the event publicly competed with each other to give millions to Robin Hood. Michael Weinstein was hired to help ensure that these funds were spent on the most effective organizations.

In 2002, Robin Hood’s annual revenues totaled $62 million, with an annual grants budget of $25 million and a program staff of 14. The board and the executive director were keen to strengthen the organization’s performance metrics as well as its fundraising and communications functions, placing a premium on accountable philanthropy and measurable results. In 2006, Robin Hood hired a new communications director, its first with a strong professional communications background. This emphasis, combined with boom years for Wall Street (home to most of Robin Hood’s major donors), resulted in significant growth. By 2009, Robin Hood was a $150 million operation with a program staff of 35 to 40. After the failure of Lehman Brothers in September 2008 and the ensuing worldwide financial crisis, many nonprofit foundations and agencies that depended on external donations experienced deep shortfalls in fundraising.
In 2009, concerned about the chilling effect that the economic downturn might have on the willingness of fund managers and others in the financial industry to give large amounts, and to be seen doing it, Robin Hood retired the traditional glow sticks in favor of wireless devices set up at each table that allowed guests to enter gift amounts anonymously. It also reduced the lowest-priced ticket for the 2009 annual dinner to $2,000, down $1,000 from the previous year’s cheapest plate. The adjustments surpassed their intended effect: the share of attendees participating in the auction increased from 3.3% in 2008 to 71% in 2009. Network anchor Brian Williams, actor Anne Hathaway, and New York Giants quarterback Eli Manning encouraged generosity by stoking friendly competition among the three sections of the Jacob K. Javits Center room where the event was held. With emcee Jon Stewart of The Daily Show, appearances by the band Black Eyed Peas,
singer Aretha Franklin, New York City mayor Michael Bloomberg, and others, the May 12, 2009 dinner’s fundraising total of $73.5 million exceeded Robin Hood’s previous record of $72 million, set in 2007. (See Exhibit 2a and 2b for company financials.)

A second event that Robin Hood held every year, in December, was what it called a “Heroes breakfast,” during which it honored three or four grantees that it considered heroes. Although Robin Hood filled the room with donors, potential donors, and executive directors, the event was not itself intended as a fundraiser. Honored grantees were given cash awards, and the beneficiaries of their programs told stories of changed lives. Weinstein said, “It is the most inspiring event we do. When you see these people come forward and hear how their lives have been changed by the grantees, it really has an emotional effect.”

Weinstein recalled a particularly moving “Heroes breakfast” that honored Mary Montreaux, executive director of Sunshine Homes, a long-time grantee. Sunshine, which provided high-quality permanent supportive housing to formerly homeless individuals and families, routinely met or exceeded the contract goals spelled out in its Robin Hood grant agreements. The organization strove to provide not only permanent housing, but also much-needed social services and a sense of community and dignity. At the breakfast, a young woman told of watching her mother die in an emergency shelter. Barely a teenager, she had to care for her four younger siblings. Because the mother had applied to Sunshine Homes before her death, Montreaux had been able to track the children down on the streets and move them and other homeless relatives into permanent housing.
The Metrics Project

In 2002, when Weinstein joined the organization, Robin Hood was making less than $30 million in grants annually. Board members and staff shared a sense that although the foundation espoused the ideas of return on investment, due diligence, and performance metrics, their implementation could be considerably strengthened in practice. Weinstein held a PhD in economics from M.I.T., had been chairman of the economics department at Haverford College, had been an economics columnist for the New York Times—and a member of its editorial board—and was president and founder of W.A.D. Financial Counseling, Inc., a nonprofit providing free financial counseling to poor families.

At the time, the practice of presenting proposals to the board, which approved all grants, relied more on information about the good work an organization was doing than on metrics related to its results. Weinstein recalled the board’s concerns:

Paul in particular, and the board and staff in general, wanted to have confidence that we were spending our money in as smart a way as they do when they invest. When he spoke to me about this job, he laid out the idea that we had a founding rhetoric here of being evidence- and outcomes-based, but there really weren’t any systems in place to get that done. Yes, the organization was collecting numbers and considered itself as rewarding success—and indeed it did—but there wasn’t a very sophisticated framework in place for judging success. No economist’s toolkit had been brought to the job. So my challenge was to move from rhetoric to sophisticated algorithms.

In his first board meeting, Weinstein came away with the impression that grant recommendations were “‘lite’—more soft music than hard facts and analysis—but still starting from a good rhetorical base.” While advocating for a grantee, one staff person praised the organization’s executive director as an “energetic octogenarian.” Weinstein commented, “In a fuzzy world of do good and feel good, the fact that an ED is an energetic octogenarian is not an irrelevant observation. But in a more tough-
minded role where we pay for results and for lifting people out of poverty, that kind of proposition begins to sound lame.”

At the same time, a number of staff in the organization brought extensive experience and grounding to the foundation’s work. For example, Emary Aronson, who had organized Robin Hood’s response to 9/11 and ran its Relief Fund, had previously served as director of education initiatives at the New York City Partnership and Chamber of Commerce, where she helped develop a $29 million education reform program. And Susan Epstein, who headed the foundation’s jobs portfolio, was a lawyer who had previously run a large nonprofit domestic violence program and who was highly regarded in the city’s nonprofit sector.

Michael Weinstein’s chief priority was to use the tools of economic analysis and measurement to further strengthen Robin Hood’s work. As he took stock of the organization’s ability to tackle this challenge, Weinstein noted that everyone associated with Robin Hood accepted in principle the notion of outcomes- and evidence-based philanthropy. “It would have been a much more difficult task to create those principles. In practice, the organization wasn’t doing anything contradictory to them; we just needed to find the mechanisms to do it well.” The board’s commitment to the process was strong. With hedge fund traders highly represented, the board understood risk, something that Weinstein welcomed. “The board understands that an unblemished record is not a good record at all when you are trying to be creative and push the envelope,” said Weinstein. Other natural advantages were a talented and energetic staff and long-term relationships of trust with nonprofit organizations and poor communities. “We were knee-deep into poor communities,” added Weinstein. “We weren’t coming in from on high and trying to get something done; we had been working with neighborhood groups and had their trust and the trust of their founders. That was crucial.” Weinstein spent a good part of his first year at Robin Hood listening, learning from program staff about the issues each portfolio dealt with, and introducing the notion that decisions about grants should depend on fact-based arguments.

Weinstein began to analyze the challenges involved in his task. Grantees of Robin Hood set out to fight poverty in different ways and to address different problems. Each measured its progress through information that made sense for the grantee’s particular issues. For example, charter schools measured the percentage of their students who eventually graduated from high school, while jobs training programs tallied the number of jobs lasting more than one year that their trainees were hired into. Robin Hood needed a deeper reading of what those results actually meant in terms of increased earnings and increased quality of life for individuals—the chief objectives of the organization’s work. How could Robin Hood compare apples to oranges, in order to understand the relative effectiveness of fighting poverty through different kinds of programs? How could it translate the concept of “return on investment” to the nonprofit world, and to the notion of fighting poverty, in a way that would help the foundation evaluate its own impact and that of its grantees?
Weinstein recalled a discussion during an off-site staff retreat in which he conveyed the importance of rigor, discipline, candor, and clarity in staff’s assessments of grantees:

One staff member expressed criticisms of a grantee’s performance and leadership. And I observed that I hadn’t read any of that criticism in the write-ups, in the recommendations that go to me or the board. She said, “Of course not, because this was a favorite grantee of the board.” I slapped the clipboard on my knee to make sure I had people’s attention, and it split in half. Although I hadn’t intended that degree of force, I did mean to say, “Thou shalt never do that again.” We are professional program analysts; we will not play politics, or pull punches, or shape conclusions to meet some prior notion of what someone else wants. If the board wants to override our professional assessment, that’s fine; that’s why boards exist. Our
job is to analyze programs to determine how big an impact per dollar they make on the poor of New York City.

As an initial step to shift the organization to a more evidence-based approach, Weinstein collaborated with a Geographic Information Systems (GIS) lab at Queens College to produce a poverty map of New York City, overlaying the areas in which Robin Hood funding operated. Most of what the map showed was not news to the organization’s staff; poverty in the city was highly concentrated in three regions: Harlem and the South Bronx, the Lower East Side, and the Queens/Brooklyn border region. (See Exhibit 3 for the poverty map.) At the same time, the map called attention to the lack of Robin Hood grant-making on the Queens/Brooklyn border. Recalled Weinstein:

Staff’s initial reaction was that there were few high-quality organizations working in certain places, and without such partners, it would be difficult for us to fight poverty. But it became clear that was not an acceptable answer for an organization with our mission, and it began to frame the conversation about how we approach our work. What do we do when things aren’t easy? Could we maximize the effectiveness of our dollars by spending them where they are most needed?
Benefit-Cost Ratios: Comparing Apples to Oranges

To ensure the best use of Robin Hood funds, Weinstein’s challenge was to develop a way to compare the relative effectiveness of poverty-fighting programs. In his view, comparing similar programs within a focused area such as jobs training was fairly straightforward. For example, one grantee’s approach to jobs training might result in a greater percentage of job placements, or a higher average salary increase for trainees, than another program. The more difficult question was how to evaluate the success of dissimilar programs, such as jobs training and housing services.

To enable such comparison, the centerpiece of Robin Hood’s metrics became the benefit-cost ratio. Weinstein referred to the process of developing BC ratios as the “relentless monetization” of benefits such that they would be comparable across programs. In principle, the BC ratio was calculated as follows:

(Poverty-Fighting Benefits of a Program / Costs to Robin Hood) x Robin Hood Factor

Poverty-fighting benefits were operationalized in terms of the private benefits that accrued to poor individuals over their lifetimes as a result of a grant.5 In a jobs training program, this numerator might show how much Robin Hood’s grant effectively raised the earnings of job trainees over their lifetimes, compared to their presumed earnings without the training. In the case of improvements in health and general well-being, the numerator estimated the boost to living standards or quality “life years.”a Robin Hood counted only benefits experienced by individuals and left out attempts to quantify societal or systemic benefits. Because the benefits were estimated over the lifetime of the individual, the ratio was calculated with the present discounted value of benefits and costs over time. The denominator of the ratio measured the cost to Robin Hood of the grant; generally this was the size of the grant in dollar terms (since all administrative costs were borne by the board).
a Quality-adjusted life years, also referred to as QALYs, were a measure developed by health economists that incorporated an assessment of quality of life into life expectancies. Gordon Marshall, “Quality Adjusted Life Years,” A Dictionary of Sociology, 1998, http://www.encyclopedia.com/doc/1O88-QualityAdjustedLifeYears.html, accessed April 2010.
The Robin Hood Factor was an estimate of the portion of the benefit that could reasonably be attributed to Robin Hood’s funding. This factor was a guard against inflating the degree to which Robin Hood could take credit for a success. For example, if a grantee lost Robin Hood’s funding, would it reduce its activity only partially? Would it have the capacity to tap alternative funding sources? If so, some portion of the program’s benefits would have occurred even without Robin Hood’s funding, and the foundation could not claim credit for that portion. Robin Hood’s staff and board members used BC ratios to inform decisions about which programs promised the greatest “poverty-fighting” potential. (See Exhibit 4 for an example of a BC ratio calculation. See Exhibit 5 for a comparison of BC ratios in a sample portfolio.)

Weinstein was careful to use the BC ratio as a “metaphor,” warning against taking the estimated numbers too seriously, and noting that one needed to exercise judgment, with metrics as guideposts. He believed that the most important aspects of the ratio were the disciplined thinking it required about the effects of a program that Robin Hood might support, compared to any other, and the tools it gave program staff to make valid arguments. If one analysis showed a 15:1 ratio versus another that came out as 3:1, those numbers would inform a recommendation on a grant, but would not be the sum total of the recommendation. (See Exhibit 6 for characteristics of Robin Hood metrics.)
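To make the mechanics concrete, the calculation can be sketched in a few lines of code. The sketch below is a hypothetical illustration, not Robin Hood’s actual model: the formula mirrors the one given above, but the discount rate, per-person earnings boost, cohort size, time horizon, and Robin Hood Factor are all invented for the example.

```python
def present_value(annual_benefit, years, discount_rate):
    """Present discounted value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

def bc_ratio(annual_benefit_per_person, people_served, years,
             discount_rate, grant_cost, robin_hood_factor):
    """(Poverty-fighting benefits / costs to Robin Hood) x Robin Hood Factor."""
    lifetime_benefits = people_served * present_value(
        annual_benefit_per_person, years, discount_rate)
    return (lifetime_benefits / grant_cost) * robin_hood_factor

# Hypothetical jobs-training grant: 100 trainees each earn $2,000 more per
# year over a 30-year working life, discounted at 5%. Robin Hood makes a
# $500,000 grant and judges that 60% of the benefit is attributable to it.
ratio = bc_ratio(annual_benefit_per_person=2_000, people_served=100,
                 years=30, discount_rate=0.05,
                 grant_cost=500_000, robin_hood_factor=0.6)
print(f"Estimated BC ratio: {ratio:.1f}:1")  # about 3.7:1 under these assumptions
```

Note how the Robin Hood Factor acts as an attribution discount: halving it halves the ratio, which is exactly the guard against over-claiming credit described above.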
Tackling Metrics across the Portfolios

Each of the four main portfolios held particular challenges for measuring BC ratios:

1. The Jobs Training and Economic Security portfolio focused on programs that generated income, such as jobs training (for ex-offenders, entry-level workers, and workers needing retraining), microlending, financial literacy, and helping small businesses and entrepreneurs. This portfolio included a subdivision called Single Stop, a chain of centers that helped individuals and families navigate the bureaucracy involved in accessing public benefits like food stamps, Medicaid, tax refunds, and legal assistance. Weinstein asked the staff of the jobs training and economic security portfolio to be the frontrunners in developing more analytical performance metrics. To him, jobs and economic security seemed the “low-hanging fruit,” the most concrete and immediately measurable of the portfolios. Jobs were simpler to quantify and map onto an anti-poverty agenda than more complex interventions such as housing, early childhood programs, or needle exchanges to reduce disease transmission. Generally, jobs programs had the added benefit—for the purposes of measurement—of producing fairly rapid results, compared to the longer time lapse between interventions and outcomes in early childhood education. In addition, Susan Epstein, long-time director of the jobs portfolio, and her staff had already developed some preliminary metrics to measure program effectiveness.

Program staff delved into the details of one measure that the team had been tracking: dollars spent per job acquired. “The presumption was that you want this number to be low,” said Weinstein. “But I wasn’t so sure. Maybe you should be spending a lot per job if people keep those jobs longer, or if the jobs pay more, or if the person you helped was harder to help than some other person. If you don’t define what you are trying to accomplish correctly, measurement can be random, or even lead you in the wrong direction.” For example, one grantee ran a jobs training program for daycare workers. Robin Hood had considered the grantee to be very effective, since a high percentage of the women it trained were placed in daycare jobs, and the program was not very costly. But on further examination, staff found that the women who entered the program were already earning comparatively high wages, and the increase in their wages after the program was not substantial. Measured over a lifetime of earning capacity, the program’s poverty-fighting power was not as
meaningful as staff had assumed. The portfolio thus adjusted its measurements to track the number of job placements held by trainees for at least one year, as well as the earnings difference before and after training. Despite these challenges, the BC ratio calculations in the jobs program were straightforward in comparison with those of the other portfolios.

2. The Education portfolio funded charter and non-charter public and private schools from kindergarten through 12th grade, including tutoring, mentoring, after-school programs, literacy, school-based mental health and special education programs, teacher training, and “last chance” high schools for dropouts. A key challenge for measuring success in educational interventions was time. Unlike in jobs, where job placement and wage levels could be determined relatively soon after a training program, several years might elapse before grade school or high school students entered the workforce. Thus, direct data collection was not feasible. To overcome this obstacle, Robin Hood developed a two-stage process for judging outcomes. First, staff identified a set of intermediate outcomes—such as school attendance, standardized test scores, and high school graduation—that could be directly observed at the time of, or shortly after, the program intervention. Second, they searched for studies in the academic and educational policy literature that linked those intermediate outcomes to expected earnings or quality-of-life indicators. For example, Robin Hood staff found evidence that a 10% increase in test scores led to a 4% increase in high school graduation rates. Based on existing research, the earnings impact of high school graduation over high school dropout was estimated at $6,500 in increased income per year.6 But this figure had to be used with caution: although there was considerable research correlating educational interventions and earnings, there remained a lack of strong experimental and longitudinal studies that connected specific educational interventions with earnings outcomes. (A worked sketch of this two-stage chain appears after this list.)

3. The Early Childhood and Youth portfolio supported programs for infants and toddlers, including home visits to help new mothers, child care training, services for abused children, and early literacy. It also funded programs for college-bound young adults, young people preparing to leave the foster care system when they turned 18, youth involved in the juvenile justice system, young people considered disconnected or at risk of imprisonment, and youth re-entering society from prisons. Here, Robin Hood adopted an approach similar to that of the education portfolio. For instance, program staff found that high-quality early childhood programs raised high school graduation rates by approximately 30%.7 Assuming that figure, Robin Hood could then use the same data linking graduation rates to an impact on poverty later in life.

4. The Survival portfolio supported housing programs for homeless and low-income residents, emergency food supplies, health services for HIV-positive individuals, services for victims of domestic violence and child abuse, needle exchange programs, and immigrant services. These kinds of services were directed at generating changes in overall well-being or standard of living rather than income. Because the survival portfolio programs were not designed to contribute to increased earnings of clients, the portfolio staff sought other ways of monetizing the programs’ benefits to well-being.
For example, for a housing program for homeless people, staff members identified the rental value of the housing provided and added research-based estimates of the value of mental health or primary care health services offered through supportive housing programs. Research also showed that placing homeless mentally ill or substance-abusing individuals in supportive housing reduced the need for emergency medical attention by 30%.8
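As a worked illustration of the two-stage approach described under the Education portfolio (item 2 above), the following sketch strings the case’s cited figures together: a test-score gain is mapped to a graduation-rate gain, which is then monetized with the $6,500 annual earnings premium. The cohort size, the reading of the “4% increase” as four percentage points, the 40-year horizon, and the 5% discount rate are all assumptions invented for the example.

```python
def present_value(annual_amount, years, discount_rate):
    """Present discounted value of a constant annual amount."""
    return sum(annual_amount / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Stage 1: intermediate outcome -> graduation. The case cites evidence that
# a 10% increase in test scores leads to a 4% increase in graduation rates
# (read here, as a simplifying assumption, as 4 percentage points).
students = 200                               # hypothetical cohort size
grad_rate_gain = 0.04                        # given a 10% test-score gain
extra_graduates = students * grad_rate_gain  # 8 expected additional graduates

# Stage 2: graduation -> earnings. The case estimates a $6,500 annual
# earnings premium for graduating rather than dropping out.
earnings_premium = 6_500

# Monetize over a hypothetical 40-year working life at a 5% discount rate.
benefit = extra_graduates * present_value(earnings_premium, 40, 0.05)
print(f"Estimated lifetime benefit: ${benefit:,.0f}")  # about $890,000
```

Dividing such a benefit estimate by the grant’s cost, and scaling by the Robin Hood Factor, would put an education grant on the same footing as a jobs grant.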
Over time, the quest for valid BC ratios gradually began to have an impact on the overall grant portfolio. For example, Mary Montreaux’s Sunshine Homes, a long-time grantee, had not been subjected to the discipline of benefit-cost analysis until 2006 (some years after Montreaux won the Heroes award). When staff undertook the analysis, Sunshine’s BC ratio was estimated to be 2:1, a relatively low ratio by Robin Hood’s standards. This result could be attributed to two factors: the grantee worked with the same population year after year in permanent housing and was thus not benefiting many new clients, and some of its clients were likely to find stable housing even without Robin Hood’s funding. Unlike the jobs training or education programs that served different people each year, permanent housing was an expensive way to fight poverty. “We have great respect for this organization and its leaders,” said Weinstein, “and took years to make sure our numbers were not missing true poverty-fighting value. But although the grantee had not changed, our ways of implementing our mission had, and we just made our last grant to the organization.”
Institutionalizing Performance Measurement

In order for rigorous performance measurement to take root in the organization, Weinstein knew that it was not enough to urge or even pressure the portfolio managing directors to begin articulating costs and benefits. They had enough on their plates, and many did not fully grasp what it would take to make evidence-based arguments about supporting, denying, or defunding a nonprofit. Weinstein’s goal was to gradually get his staff to reach a shared understanding of what elements defined a winning argument rather than a frivolous one. He wanted the managing directors and their teams to adopt and internalize the language of benefits and costs: “It is the metaphor, not the arithmetic of benefits and costs, that matters. If you want to make your case, you must put it in the language of marginal benefits and marginal costs.”

To institutionalize performance measurement, particularly in the form of benefit-cost analysis, Weinstein found that he needed to provide his staff with two critical forms of support: capacity and systems.
Building Capacity

To build internal capacity, Weinstein in 2005 hired Cynthia Esposito Lamy, an expert in evaluation with a doctorate in education and a professional background in impact measurement. At Robin Hood, she was charged with researching and developing BC ratio-based metrics that would allow the calculation of ratios across portfolios. Lamy reported directly to Weinstein, as did the portfolio managing directors. Her position was not on par with the managing directors, however, as she did not have formal authority over programs or resources. She worked both on her own and with program staff to develop metrics through relevant literature reviews and through the use of external consultants. (See Exhibit 7 for an organizational chart.)

Lamy devoted much of her time to building the foundation for a more organized, standardized, and transparent approach to calculating the benefits of Robin Hood’s grants. She spent months familiarizing herself with the existing literature, both on measuring the internal quality of social interventions and on their ultimate benefits and costs. Reflecting on the period when she arrived at Robin Hood from the field of early childhood development—widely regarded as the leader among the social sciences in process and outcomes measurement—Lamy recalled: “I was really surprised to discover that other areas of social science are not yet able to measure the quality of their programs as an internal diagnostic measure.” On the issue of measuring those programs’ social benefits, Lamy explained, “Often I’ve had to find something tangential that was related closely enough that it could be a placeholder, and [then] clearly indicate that it’s a placeholder.”
Robin Hood initially believed that providing capacity for performance measurement among staff was not enough; grantees also needed support. The vast majority of grantees did not have the expertise or capacity to present measures on their clients. Robin Hood thus began devoting increased resources toward building the capacities of grantee organizations to measure outcomes, as well as to improve their abilities to scale their activities. However, Robin Hood soon concluded that the task was too large. In many cases, the fields in which grantees operated lacked appropriate measurement tools. In others, such tools existed but were fraught with methodological and other problems. To be done correctly, measurement had to be longitudinal, and it required a level of research expertise that was beyond the capacities of grantees—or even of Robin Hood, at times.

While scaling back on expectations that grantees would track performance metrics, Robin Hood continued to offer management assistance and support, allocating one full-time staff person to each portfolio specifically for building grantee capacities. Support was available to improve grantees’ financial and accounting systems, governance structures, use of technology, strategic planning, and communications and marketing. Working closely with the managing director of each portfolio, the management assistance unit looked for and developed opportunities to increase impact by working with partners or by contracting and overseeing pro bono or reduced-rate work by outside consultants.
Internal Systems

Capacity building, however, whether in the form of a dedicated metrics manager or management assistance to grantees, was not enough. There was also a need for performance reporting systems—formal and routinized ways for each program to report back to Weinstein and Robin Hood’s board about the performance of their grantees. For the board’s quarterly meetings, where decisions were made on each proposed grant, program staff prepared recommendations known as the “write-ups.” Weinstein set explicit requirements for the write-ups: they were to include not only grant summaries and recommendations, but also explicit quantification of major costs and benefits, and they were to highlight important issues for the board to consider. In other words, the write-ups became the vehicle by which staff conveyed to the board their central case for each grantee, whether the proposal was to augment, decrease, modify, or phase out funding. Beyond the write-ups, which together formed a quarterly “board book” nearly an inch thick, staff developed more detailed documentation in a “grant book.”

Program staff had final say over the draft write-ups submitted to Weinstein for comments. Having served on the editorial board of the New York Times, Weinstein was exacting:

I care passionately about this monster that everyone on the staff hates, called the write-ups. I care passionately for two reasons. One, because it does no good to be smart if you can’t communicate what you are doing. And, two, the requirement to make your case, make your argument, and make it clear within a disciplined framework is an exercise in hard thinking. I regard it a crime worthy of capital punishment if we haven’t, in the summaries, flagged a problem with the grant. No board member should have to read carefully to find out what’s wrong because we haven’t identified and brought it up for their attention in a clear way.

Most write-ups went through multiple drafts before Weinstein was satisfied that the argument on each potential grantee had been clearly and cogently presented. Gradually, the notion of what kinds of arguments were most convincing began to evolve. Write-ups began to move away, in Weinstein’s words, “from relying on adjectives and adverbs to a world in which we are relying on facts and evidence.” Deborah McCoy, managing director for early childhood and youth, recalled:
When we began, there was a lot of talk about whether we were trying to measure the unmeasurable. But that settled pretty quickly. Michael imposed a Spartan discipline on folks. If you want to make a new grant, it better be a case he gets. And the only way to convey it to him is to make a case of impact. If you don’t, he will say, “This is nice music, but where is the case?”

Over the course of a year, Robin Hood’s program staff typically produced over 3,000 pages of analysis for board meetings, with about one-fourth of the grants coming up for approval each quarter. Robin Hood made only annual grants—no multi-year grants—and each one had to be approved at the board level every year. Board committees for each portfolio also met four times a year to discuss grants and program strategies. In addition, each portfolio’s staff developed an overview twice a year, which set out the overall direction for the portfolio and included generalized conclusions about which strategies had been particularly effective or ineffective.
Perspectives within Robin Hood

Despite Weinstein and Lamy’s efforts to institutionalize performance measurement around BC ratios, developing new metrics was still often an ad hoc process. In some instances, Lamy suggested metrics to a portfolio’s managing director based on her review of the relevant literature. She provided an example:

The early childhood metrics were basically nonexistent when I got here, and I was able to build some from the ground up. The first thing we did was create a pre-K metric, and then I started extrapolating to an early intervention for younger kids, aged zero to three. There is related tangential information in these kinds of studies about parenting, and we began to make a parenting metric. Then I went to the child advocacy literature and looked at outcomes for interventions around juvenile delinquency and child abuse, social service, and social work, and I borrowed from those.

At other times, a managing director would take the lead and come to Lamy for help in creating a particular metric. For example, Lamy developed a measure for an English as a second language (ESL) program supported by the survival portfolio to assist immigrants. “We needed a metric to understand monetized benefits. And there is nothing—the research in that area is not yet ready to support benefit-cost analyses,” explained Lamy. So she borrowed estimates from research that had monetized the benefits of a year of education and adapted them as a “placeholder” metric until better research became available. The research suggested that a year of education increased average earnings between 9% and 20%. Taking a low average starting salary of $16,000 per year—the estimated average earnings of a minority high school dropout—Lamy assumed that the ESL program would increase that salary by 9%. She knew the estimate was problematic, but given the dearth of research, she chose a conservative value for the anticipated benefits.
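Lamy’s placeholder arithmetic can be made explicit. In the sketch below, the $16,000 baseline salary and the conservative 9% uplift come from the case; the 30-year horizon and 5% discount rate are invented assumptions, and the whole calculation is, as Lamy stressed, a placeholder rather than a validated estimate.

```python
def present_value(annual_amount, years, discount_rate):
    """Present discounted value of a constant annual amount."""
    return sum(annual_amount / (1 + discount_rate) ** t
               for t in range(1, years + 1))

baseline_salary = 16_000  # case estimate: average earnings of a high school dropout
uplift = 0.09             # conservative end of the 9%-20% range for a year of education

annual_gain = baseline_salary * uplift  # $1,440 per year
# Hypothetical horizon and rate: a 30-year working life discounted at 5%.
lifetime_gain = present_value(annual_gain, 30, 0.05)
print(f"Placeholder ESL benefit per client: ${lifetime_gain:,.0f}")  # about $22,000
```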
The managing directors of the portfolios had a range of views on the metrics. Susan Epstein, the head of the jobs portfolio, reflected on the shift that occurred in her group:

When we began with the jobs portfolio, at first everyone thought the new method seemed a little bloodless. No one’s enthusiasm is motivated purely by metrics. But that kind of skepticism is OK. The important thing is that the questions we ask now are better than the questions we used to ask. We are better able to use data to make decisions.

Steven Lee, who ran the Single Stop program within the jobs portfolio, explained that his program used annual, one-time numbers as measures, without including potential long-term benefits. This tended to push the benefits calculation down vis-à-vis other grants. Lee believed that the benefits estimates for Single Stop’s financial counseling programs, which provided financial education to poor people in New York City, were inaccurately low. The existing metrics estimated the interest earned on bank account deposits that poor people might not have made without the program. He explained:

I felt that the number was too low; so we did some research, talked to our financial counseling groups, and then theorized a link between financial education and the prevention of homelessness and the probability of employment. We took some research on homelessness and on employment and tied it into how financial education and support helps decrease homelessness and increase jobs. Then we took a lifetime measure of that. So the annual linear number that we had this year became a much higher, robust number by making a theoretical analogy that we thought was reasonably smart.

Eric Weingartner, head of the survival portfolio, saw policy decisions and investment strategy as the most prominent drivers of his portfolio’s work, and found the BC ratios a useful check and balance—a “forced discipline that checks the investment strategy.” He explained, “If we develop a strategy that is critical to impacting poor New Yorkers, almost invariably the BC will support the investment. Since I’ve been at Robin Hood, I’ve never considered an investment where the BC did not end up justifying it.” Weingartner was also seeking ways of building relationships with other actors, especially local government, and saw the importance of getting involved in policy issues that affected poverty-fighting investments. Robin Hood tended to stay away from policy work and thus did not have a method for calculating the benefits of policy engagement. Weingartner believed that it was strategic and fruitful to build relationships with city government offices and to engage in policy issues, regardless of their not being reflected in the BC ratios.

There was also some skepticism among the program staff. For example, one member commented, “One problem with the metrics is that the theory and approach seem to assume a capital market’s efficiency in the nonprofit sector where that kind of efficiency doesn’t really exist.” Some program staff also believed that some of the numbers reflected in the BC ratios were overvalued. One said, “Given time pressures, I’m not sure program staff members are looking at it carefully. We assume that it is the right value, especially if it’s a high number. We like the fact that it’s a high number, and we may be willing to accept research that is not very strong to support it. When it is undervalued, there is more incentive to look more carefully at it.”

At the same time, metrics were just one part of the overall review of grantee performance. Staff knowledge of individual grantees included factors that did not easily lend themselves to measurement, such as expected changes in the quality of a grantee’s leadership. Weinstein added his own pragmatic critique: “One of the points I make all the time is that every benefit-cost ratio we do is wrong; there is not a BC ratio we’ve ever done that is right. But it is an approximation, better than anything else we’ve had or that anyone else has. We’ve assigned dollar values to these benefits. Now the game is to improve them, to do more sophisticated estimates.”
The Metrics “Fear Factor”

The use of performance metrics also connected to something deeper within the organization. Weinstein occasionally commented that he would like to see a culture of healthy debate and discussion take root among program staff about metrics and about which grantees to fund. “The culture of the organization has been not to openly criticize anyone else’s work. I wish there were more arguing; to me, there is not enough of it.” Robin Hood’s executive director, David Saltzman, who had
seen the organization evolve over two decades, offered a somewhat different perspective: “We hire really smart people, and they come with many different personalities. Some are more argumentative than others. What matters is that we agree on the main principles and continue to push each other hard.”

Weinstein observed that in staff meetings, program staff members did not frequently question one another’s programs or the way a metric had been constructed. He recognized that a sense of competition among portfolios was a factor. “When we were putting the BC ratios in place,” he said, “the survival portfolio staff members were understandably scared because they thought their numbers would come up short. If you cure poverty with education or jobs training, that can show up big in terms of earning potential in the future, but just giving people food and a roof over their head is not going to look too good.” One staff member called this the “fear factor.” Managing directors who sought support in developing their metrics ran the risk of welcoming scrutiny of their portfolios, potentially making their work look less effective compared to others’. Saltzman reflected on this tension in the organization:

We’re assessing our investments with measures that are not fully formed. When we first began measuring benefits and costs, staff were indeed afraid that imperfect metrics would lead us to stop funding organizations that we knew were doing great work. How do you reconcile this tension? That’s the real fear factor. It’s a moving target, developing metrics and then checking them against what we know at the time. But we are serious about it. In the early days, there were a couple of program officers who resisted metrics because they were so sure of their own analysis. They ended up leaving, or being asked to leave.

Lamy was also wary of the metrics taking on a life of their own:

This is a brand-new field, and there’s not enough consensus on these estimates, or even on what goes into them. Yet there’s a certainty about the language of it: I can say, “Oh, that program has a seven-to-one BC ratio.” Well, that sticks in the mind. Policymakers remember that. And it doesn’t matter if you write five more papers about what did or didn’t go into that seven-to-one ratio, and what was or wasn’t estimated correctly—it just doesn’t matter. There’s a power in the simplicity of a benefit-cost ratio. And it’s hard, once they are out there, to argue them. There’s a visceral kind of power in it that can be very scary.

At the same time, Robin Hood’s communications and resource development departments liked the BC ratios and wanted to publicize them more to donors to demonstrate that Robin Hood was serious about measurable results.
Impacts on Grantees

Robin Hood’s emphasis on performance measurement also affected relationships with grantee organizations. Most grantees eventually came to accept that if they wanted Robin Hood’s support, they would have to go through a rigorous review every year. Given the demanding process of measurement and annual review, winning a Robin Hood grant came to be seen by many other funders as a seal of approval of a nonprofit organization. A funding relationship with Robin Hood was thus highly desirable in New York City’s nonprofit world because it provided leverage for raising additional funds.

Aside from the greater volume of data that grantees were asked to collect and make available to Robin Hood, some grantees were also affected in more fundamental ways. Lee, director of the Single Stop portfolio, explained:
In the positive sense, our work helps grantees think through how they best serve clients to get the best results for poor people. For example, in the past year we’ve revised how we value getting health insurance for poor people; we’ve significantly increased the value, almost doubled it. I’ve seen the grantees I work closely with hone in more on helping poor people get health insurance. They have not done this in the past, because it’s not easy to do. But now they are doing it because they know that Robin Hood values it more. And they know we value it because we have been convinced that in theory, it is worth more to a poor person to get health insurance than, for example, getting food stamps. I see grantees driving toward meeting Robin Hood numbers. Every year we give them a chart that shows where they rank relative to other portfolio grantees. Grantees look at that chart, and they want to be at the top of it.
Driving Poverty-Fighting with Measurement

Since Weinstein’s arrival at Robin Hood, the foundation had become one of the leading organizations implementing a BC ratio approach to performance measurement. The path for developing the metrics approach was laid out in a white paper written by Weinstein, with Lamy’s assistance, called “Measuring Success: How Robin Hood Estimates the Impact of Grants.” Metrics were more than feathers on Robin Hood’s arrows, guiding them toward a target; they were also the tension in the bowstring, propelling the arrows forward in the fight against poverty. But what did these arrows, or interventions, add up to? What constituted high performance for Robin Hood itself? At least three different perspectives coexisted within Robin Hood.

Weinstein viewed the BC ratio as a strategy for allocating resources most efficiently. He commented, “We will field all proposals; we will pursue anything we think will produce a favorable benefit-cost ratio where we measure poverty fighting as our numerator and cost as our denominator. That is a theory of change; it is a one-by-one, bottoms-up, survival-of-the-fittest strategy.” In this way, Robin Hood created something similar to a market for poverty-fighting programs, with the exception of policy or advocacy work. The overall impact of the Robin Hood Foundation could thus be estimated by aggregating the impacts on all individuals affected by its grants. As Weinstein wrote in the white paper:

Robin Hood is often asked whether we measure ourselves with the same rigor with which we measure the success of our grantees. In fact, the two measures are identical. We measure our grantees in precisely the same way we measure Robin Hood: by how much poverty-fighting good we do with each dollar we spend. Our benefit-cost ratios capture, as best as we know how, Robin Hood’s impact.9

The foundation’s executive director, David Saltzman, explained further: “We average the cost-benefit of all the grants we make, and estimate that for every dollar we put out in grants, we are able to improve standards of living for poor people by 18 dollars. And we are thinking about coming up with a smarter poverty standard that would be more accurate as a measure of the true living standards of New Yorkers.” Specifically, Robin Hood was working to develop a measure of poverty that would incorporate health status, a component of living standards absent from official poverty measures.
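One natural reading of the averaging Saltzman describes, sketched below with invented grant data, is a dollar-weighted average: total monetized benefits across all grants divided by total grant dollars.

```python
# Hypothetical portfolio: (grant cost, BC ratio) pairs; all figures invented.
grants = [(500_000, 12.0), (250_000, 30.0), (1_000_000, 18.5), (750_000, 15.0)]

total_cost = sum(cost for cost, _ in grants)
total_benefit = sum(cost * ratio for cost, ratio in grants)

# Benefits per grant dollar across the whole portfolio.
print(f"Portfolio-level BC ratio: {total_benefit / total_cost:.1f}:1")  # 17.3:1
```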
Among the managing directors, many had been hired for their depth of experience in certain sectors, and they brought their own understanding of impacts to bear on their portfolios. McCoy, who had joined Robin Hood in 2005 after several years at the Edna McConnell Clark Foundation, emphasized that early childhood interventions required a long-term approach to understanding impact. “You can’t measure in the near term what kinds of services will lead to the best outcome later in life,” she explained, “and so we feel very compelled to tie our work to the best longitudinal research that’s out there.” Her approach was to work with experts to help identify which early childhood inputs showed evidence of long-term benefits, and then to use that information to develop a theory of change for her portfolio to guide grant-making.

Similarly, (Eric) Weingartner’s prior experience in the policy world influenced his strategy for the survival portfolio. In his view, it was important to understand “what the city’s policy is on an issue,” such as domestic violence, and then to support grantees whose demonstrated results could influence how the city would allocate its dollars. “We have the ability to be more expert at what we do in a policy arena, and part of that means making an investment in the whole landscape more strategically.”

Lamy put it another way: “I am actually afraid that if benefit-cost analysis goes to its logical conclusion, at a national level, and with the attention it is getting now, everyone will want to fund programs with the highest BC ratio. But is that how you fight poverty?”

The impact of the global financial crisis was being acutely felt by New York City’s poor. Saltzman provided some of the numbers: “More than 40% of African-American men in New York City don’t have jobs. More than 50% of babies are born into poverty. More than 1.3 million people went to a soup kitchen last year. The number of people who need help is growing.”

Weinstein considered the coexistence of these distinct threads and observed the increasingly sophisticated level of discussion about measurement and impact in the organization. The foundation had come a long way since 2002. With a record number of both donors and dollars raised from its 2009 fundraiser, as well as a pledge of $100 million in matching funds for new donations from Soros’ OSI and the board, he was frequently asked about Robin Hood’s performance. In three or five years’ time, how would Robin Hood know it was succeeding?
Exhibit 1  Robin Hood Foundation Board Members
Alan D. Schwartz, Chair (Executive Chairman, Guggenheim Partners)
Lee S. Ainslie III, Vice Chair (Managing Partner, Maverick Capital Management, LLC)
Tom Brokaw, Vice Chair (NBC News)
Daniel S. Och, Vice Chair (Senior Managing Member, Och-Ziff Capital Management Group)
Paul Tudor Jones II, Founder (Chairman and CEO, Tudor Investment Corporation)
Victoria B. Bjorklund (Partner, Simpson Thacher and Bartlett)
Scott Bommer (President, SAB Capital Management)
Peter F. Borish (Chairman and CEO, Computer Trading Corporation)
Geoffrey Canada (President and CEO, Harlem Children's Zone)
Maurice Chessa (Director, Bedford Stuyvesant–I Have a Dream Program)
Steven A. Cohen (Chairman and CEO, S.A.C. Capital Advisors, LLC)
Glenn Dubin (Co-Founder and Managing Member, Highbridge Capital Management)
Marian Wright Edelman (President, Children's Defense Fund)
David Einhorn (President and Founder, Greenlight Capital, Inc.)
Julius Gaudio (Managing Director, DE Shaw & Co)
Douglas Haynes (Director & Northeast Office Manager, McKinsey & Co.)
Jeffrey R. Immelt (Chairman and CEO, General Electric Co.)
Peter D. Kiernan III (Senior Advisor, Cyrus Capital Partners)
Kenneth G. Langone (Chairman and CEO, Invemed Associates, Inc.)
Mary McCormick (President, Fund for the City of New York)
Doug Morris (Chairman and CEO, Universal Music Group)
Gwyneth Paltrow (Actress)
Robert Pittman (Partner, Pilot Group, LLC)
David Puth (Head of Investment Research, Securities Finance & Trading Worldwide, State Street Corporation)
Larry Robbins (Founder and CEO, Glenview Capital Management)
Jes Staley (Chief Executive Officer, Investment Bank, J.P. Morgan Chase)
Barry Sternlicht (Chairman and CEO, Starwood Capital Group)
Max Stone (Managing Director, DE Shaw & Co)
John Sykes (Media Executive)
Harvey Weinstein (Co-Chairman, The Weinstein Company)
Brian Williams (Anchor & Managing Editor, NBC Nightly News)
Jeff Zucker (President and Chief Executive Officer, NBC/Universal)

Source: Company website, www.robinhood.org, accessed February 2010.
Exhibit 2a  Robin Hood Foundation Revenue and Expenses, 2006–2009

                              2006           2007           2008           2009
Revenue
  Contributions        $133,415,265   $153,760,584   $147,428,689   $148,202,733
  Investments            32,039,366     26,042,036     11,344,629      5,688,440
  Special Events         (5,766,237)    (5,579,339)    (8,191,619)    (6,505,919)
  Total Revenue         159,688,394    174,223,281    150,581,699    147,385,254

Expenses
  Program Services       87,076,346    147,224,063    123,661,660    114,749,471
  Administration          2,149,668      2,524,718      2,517,281      2,662,971
  Other                   4,936,011      6,465,454      7,304,310      7,737,187
  Total Expenses         94,162,025    156,214,235    133,483,251    125,149,629

Net Gain/Loss           $65,526,369    $18,009,046    $17,098,448    $22,235,625

Exhibit 2b  Robin Hood Balance Sheet, 2006–2009

                                      2006           2007           2008           2009
Assets
  Cash and Equivalent          $98,055,932   $119,783,708   $131,059,903   $119,162,394
  Accounts Receivable                    0              0              0              0
  Pledges & Grants Receivable   12,806,977     31,124,070     36,309,213     51,332,371
  Receivable/Other                 500,000              0              0              0
  Investment/Securities            100,000              0              0              0
  Investment/Other             165,326,965    145,240,201    114,143,328    113,155,208
  Fixed Assets                     619,016      1,787,266      4,633,161      5,107,696
  Other                         11,111,208     48,724,931     32,742,764     69,447,952
  Total Assets                 288,520,098    346,660,176    318,888,369    358,205,621

Liabilities
  Accounts Payable               2,609,719      5,946,141     10,613,667      4,352,561
  Grants Payable                42,823,889     78,907,931     72,337,937     80,365,445
  Deferred Revenue                 270,040        980,608        191,255         92,087
  Total Liabilities             45,703,648     85,834,680     83,142,859     84,810,093

Fund Balance                  $242,816,450   $260,825,496   $235,745,510   $273,395,527

Source: IRS form 990 and company documents.
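The reconstructed statements above invite a quick arithmetic check. A short Python sketch, using only the figures reported in the 2009 columns, confirms that the totals tie out (revenue minus expenses equals the net gain, and assets minus liabilities equals the fund balance):

    # Consistency check on the 2009 column of Exhibits 2a and 2b.
    revenue_2009 = 148_202_733 + 5_688_440 - 6_505_919     # contributions + investments + special events
    expenses_2009 = 114_749_471 + 2_662_971 + 7_737_187    # program services + administration + other
    assert revenue_2009 - expenses_2009 == 22_235_625      # reported net gain

    total_assets_2009 = 358_205_621
    total_liabilities_2009 = 4_352_561 + 80_365_445 + 92_087
    assert total_assets_2009 - total_liabilities_2009 == 273_395_527   # reported fund balance
    print("2009 totals tie out")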
Exhibit 3  NYC Boroughs, 2010 and Poverty Map of NYC, 2002

[Maps not reproduced in this text version.]

Source: Professor Andrew A. Beveridge, www.socialexplorer.com, from census data and boundaries, U.S. Census 2000.
Exhibit 4  Benefit-Cost Ratio, Sample Calculation
Bob’s Jobs is a fictitious job-training organization funded by Robin Hood. Basic data (for a given time period):

• 150 female trainees: 100 of whom were unemployed prior to training and 50 of whom were working prior to training.
• Of the 150 trainees, 75 graduate from the training program; three drop out; 41 keep jobs for 3 to 12 months (short-term employed); and 31 keep jobs for more than 1 year (long-term employed).
• Average number of children per trainee: set to 1.8.
• Intergenerational income boost: from the literature, for every $10,000 increase in parental income, future incomes of children rise by 3.6 percent.
• Robin Hood Factor: percentage of grantee’s successes that would disappear if the grant were withdrawn. Set to 50 percent. (The percentage of Bob’s Jobs total costs covered by Robin Hood is around 60 percent, but staff judged Robin Hood’s importance to be slightly less, at 50 percent.)
Symbols:

• Y1i = pre-training (counterfactual) income for the ith trainee, i = 1 . . . 150.
• Y2i = post-training earnings for the 72 trainees who graduate and work (41 short-term and 31 long-term).
• TEBi = earnings boost for the ith trainee = Y2i – Y1i.
• TEB = total trainee earnings boost (sum of TEBi).
Step 1: Trainee Earnings Boost (TEB): $2.6 million

For the three dropouts, TEBi = 0.

For the 41 short-term employed, TEBi = the one-shot rise in income over the three to 12 months that they remained employed. Set equal to $3,000. (No discounting is needed for changes that last only a few months.)

For the 31 long-term employed, TEBi = present discounted value of (Y2i – Y1i) over the career of the ith trainee. Assume a 3.5 percent discount factor; earnings rise by 1.5 percent a year; and a 30-year career (which allows for years of underemployment and unemployment). Set the average TEBi for the 31 long-term employed at $4,500/year, or about $80,000 over their careers.

TEB = sum of TEBi across the 75 graduates. For the three dropouts, the sum is $0. For the 41 short-term employed, the sum is 41 × $3,000 = $123,000. For the 31 long-term employed, the sum is 31 × $80,000 = $2,480,000, or about $2.5 million. Thus, TEB = $0 + $123,000 + $2,480,000 ≈ $2.6 million.
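The $80,000 career figure compresses a discounted-annuity calculation. As a rough Python sketch of that arithmetic (assuming the first year’s $4,500 boost arrives at the end of year one, grows 1.5 percent annually, and is discounted at 3.5 percent over 30 years):

    # PDV of a $4,500/year earnings boost growing 1.5%/yr, discounted at 3.5%.
    annual_boost, growth, discount, years = 4_500, 0.015, 0.035, 30
    pdv = sum(
        annual_boost * (1 + growth) ** t / (1 + discount) ** (t + 1)
        for t in range(years)
    )
    print(f"PDV of career earnings boost: ${pdv:,.0f}")

A straight growing annuity comes out close to $100,000; the exhibit’s $80,000 reflects a further downward allowance for expected years of underemployment and unemployment, as noted above.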
Step 2: Intergenerational Income Boost: $365,000

From the literature, for every $10,000 increase in parental income, future incomes of children rise by 3.6 percent. Assume: (a) half of the children drop out of high school, earning an average of $16,000; (b) 22 percent graduate high school and do not proceed to college, earning an average of $22,500; (c) 17 percent attend college but don’t graduate, earning an average of $27,000; (d) 4 percent earn an associate’s degree (AA), earning an average of $35,000; and (e) 7 percent earn a bachelor’s degree (BA), earning an average of $55,000.

Average income = (0.5 × $16,000) + (0.22 × $22,500) + (0.17 × $27,000) + (0.04 × $35,000) + (0.07 × $55,000) ≈ $23,000/year. Assume that, had their parents not entered the Bob’s Jobs program, the children of trainees would have earned this same $23,000/year, with a present discounted value (P.D.V.) of $400,000 over their careers. The P.D.V. of a 3.6 percent rise in that projected income is $14,500 (3.6 percent of $400,000).

Thus, children’s incomes will rise by: (31 long-term employed) × ($4,500 average earnings boost) × (1.8 children/trainee) × (1/$10,000) × ($14,500) = $364,095, or about $365,000.

Step 3: Total Income Boost: $2.9 million

Total Income Boost = total trainee earnings boost plus intergenerational income boost = $2,600,000 + $365,000 ≈ $2.9 million.

Step 4: Robin Hood Benefits: $1.45 million

Robin Hood Benefits = Total Income Boost × Robin Hood Factor = $2,900,000 × 0.5 = $1.45 million.

Step 5: Benefit-Cost Ratio: 7:1

Robin Hood Cost = grant = $200,000.
Benefit-Cost Ratio = Robin Hood Benefits / Robin Hood Cost = $1,450,000 / $200,000 ≈ 7:1.

Thus, for every dollar spent by Robin Hood, the earnings of poor individuals rise by $7.

Note: 7:1 does not mean that the earnings of poor individuals rise by an average of $7. This ratio captures the impact per dollar spent by Robin Hood, not the impact per trainee.
Source: M. Weinstein (with Cynthia Esposito Lamy), “Measuring Success: How Robin Hood Estimates the Impact of Grants,” New York, NY: Robin Hood Foundation, 2009.
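To trace the exhibit end to end, here is a compact Python sketch of the five steps. All figures are the exhibit’s own; note that the exhibit rounds intermediate totals (to $2.6 million and $365,000), which is why it reports 7:1 where the unrounded arithmetic gives roughly 7.4:1:

    # Steps 1-5 of the Bob's Jobs sample benefit-cost calculation.
    # Step 1: trainee earnings boost (dropouts + short-term + long-term).
    teb = 3 * 0 + 41 * 3_000 + 31 * 80_000

    # Step 2: intergenerational income boost: 31 long-term trainees x
    # $4,500/yr boost x 1.8 children, scaled by the $14,500 PDV of a 3.6%
    # child-income rise per $10,000 of parental income.
    intergen = 31 * 4_500 * 1.8 * (14_500 / 10_000)

    # Step 3: total income boost.
    total_boost = teb + intergen

    # Step 4: apply the 50% Robin Hood Factor.
    rh_benefits = total_boost * 0.5

    # Step 5: benefit-cost ratio against the $200,000 grant.
    print(f"BC ratio: {rh_benefits / 200_000:.2f} : 1")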
Exhibit 5  Job Training Benefit-Cost Rankings, 2009

Rank  Program  Type of Program              Demographics              B/C Rounded 2009  B/C Rounded 2010
  1   A        Sector (trade)               Immigrants                70:1              15:1
  2   B        Sector (transportation)      Some ex-offenders         60:1              20:1
  3   C        Sector (technology)          Immigrants                60:1              10:1
  4   D        Sector (health)              Immigrants, women         50:1              50:1
  5   E        Placement-only               Ex-offenders, some youth  45:1              20:1
  6   F        Sector (health)              Immigrants, women         40:1              30:1
  7   G        Sector (health)              All                       40:1              20:1
  8   H        Sector (health)              Immigrants, women         40:1              10:1
  9   I        Sector (health)              Immigrants, women         35:1              15:1
 10   J        Sector (technology)          Some ex-offenders         35:1              10:1
 11   K        Sector (health)              Immigrants, women         35:1               5:1
 12   L        Sector (transportation)      All                       35:1              10:1
 13   M        Sector                       Immigrants, women         30:1              10:1
 14   N        Sector (trade)               Women                     30:1               5:1
 15   O        Placement-only               All                       30:1              10:1
 16   P        Sector (health)              Immigrants, women         30:1              10:1
 17   Q        Sector (health)              Immigrants, women         25:1              15:1
 18   R        Sector (trade)               Some ex-offenders         25:1               5:1
 19   S        Sector (health, technology)  Youth                     25:1               5:1
 20   T        Sector (health)              Immigrants, women         20:1              NA
 21   U        General                      Youth                     20:1              10:1
 22   V        Sector (technology)          Immigrants                20:1               5:1
 23   W        Placement-only               Some ex-offenders         20:1               5:1
 24   X        Placement-only               Immigrants                20:1              15:1
 25   Y        Sector (trade)               Women                     20:1               5:1
 26   Z        Sector (health)              Immigrants, women         20:1               5:1
 27   AA       Sector (health)              All                       20:1              30:1
 28   BB       Sector (health)              Some ex-offenders         20:1               5:1
 29   CC       Sector (technology)          Youth                     20:1               5:1
 30   DD       Placement-only               Ex-offenders              20:1               5:1
 31   EE       Sector (health)              Youth                     15:1               1:1
 32   FF       Placement-only               Immigrants                15:1               5:1
 33   GG       Placement-only               Immigrants                15:1              45:1
 34   HH       General                      Some ex-offenders         15:1               5:1
 35   II       Residential                  Ex-offenders              10:1               1:1
 36   JJ       Residential                  Ex-offenders              10:1              NA
 37   KK       General                      Youth                     10:1              10:1
 38   LL       Sector (health)              All                       10:1              10:1
 39   MM       Placement-only               All                       10:1               5:1
 40   NN       Sector (environmental)       Some ex-offenders         10:1               5:1
 41   OO       Sector (technology)          Some youth                10:1               5:1
 42   PP       Residential                  Ex-offenders               5:1               5:1
 43   QQ       Residential                  Ex-offenders, women        5:1               1:1
 44   RR       Placement-only               All                        5:1               5:1
 45   SS       Placement-only               Ex-offenders               5:1              NA
 46   TT       Placement-only               Immigrants                 1:1               1:1
 47   UU       Sector (health)              Immigrants                 1:1               5:1
 48   VV       Residential                  Women                      1:1               0:1

Source: Company documents.
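The annual ranking chart that grantees see, mentioned earlier in the case, is mechanically just a sort on the benefit-cost column. A minimal Python sketch using the first five rows of Exhibit 5 (the rank helper below is illustrative, not Robin Hood’s actual tooling):

    # Re-rank programs A-E by their rounded BC ratios from Exhibit 5.
    bc_2009 = {"A": 70, "B": 60, "C": 60, "D": 50, "E": 45}
    bc_2010 = {"A": 15, "B": 20, "C": 10, "D": 50, "E": 20}

    def rank(ratios):
        # Sort descending by ratio; ties fall back to alphabetical order.
        return [name for name, _ in sorted(ratios.items(), key=lambda kv: (-kv[1], kv[0]))]

    print("2009 ranking:", rank(bc_2009))   # ['A', 'B', 'C', 'D', 'E']
    print("2010 ranking:", rank(bc_2010))   # ['D', 'B', 'E', 'A', 'C']

The reshuffling between the 2009 and 2010 columns is visible throughout the full table above: rounded ratios move substantially from year to year, so the resulting rankings can reorder dramatically.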
Exhibit 6  What Robin Hood Metrics Are and Are Not
Are: A standard for ranking grants, comparing the impact of similar and dissimilar programs.
Are not: The only criteria for making grant decisions. Observation and subjective judgments also influence grant decisions.

Are: A means of communicating to grantees how Robin Hood evaluates them.
Are not: Report cards on the programs we support. An organization can fulfill its own mission and still come up short on Robin Hood metrics.

Are: The basis for a common vocabulary within Robin Hood, to our donors and in the nonprofit community.
Are not: Exact. Neither the data we capture nor the formulas we apply are precise.

Are: A tool for achieving transparency. Robin Hood welcomes independent voices to examine, criticize, and help improve our metrics.
Are not: Unchanging. With additional research and refined calculations, the metrics system is designed to evolve over time.

Are: A diagnostic tool. What do our highest-scoring grantees have in common? Our lowest?
Are not: A replacement for hardworking, sharp-eyed program officers.

Are: A method for assessing Robin Hood. We measure our own impact by the same metrics system used to evaluate grantees: how much poverty-fighting good we do with each dollar we spend.
Are not: The universal answer for applying investment principles to charitable giving. Other foundations and grant-making organizations may employ different, but useful, standards.
Source: M. Weinstein (with Cynthia Esposito Lamy), “Measuring Success: How Robin Hood Estimates the Impact of Grants,” New York, NY: Robin Hood Foundation, 2009.
Exhibit 7  Organizational Chart—Robin Hood Foundation Management Team and Programs Department

[Organizational chart not reproduced in this text version.]

Source: Company documents.
Endnotes

1 Cara Buckley, “City Refines Formula to Measure Poverty Rate,” The New York Times, July 14, 2008, http://www.nytimes.com/2008/07/14/nyregion/14poverty.html?_r=1, accessed February 2010.

2 Duncan Greenberg, “Most Powerful Billionaire Boards,” Forbes, July 23, 2009.

3 “Manna from Hedging,” Institutional Investor–International, June 1, 2003, via Factiva, accessed November 2009.

4 Andy Serwer, “The Legend of Robin Hood,” Fortune, September 18, 2006, http://money.cnn.com/magazines/fortune/fortune_archive/2006/09/18/8386204/index.htm, accessed November 2009.

5 Michael Weinstein (with Cynthia Esposito Lamy), “Measuring Success: How Robin Hood Estimates the Impact of Grants,” New York, NY: Robin Hood Foundation, February 2009, p. 27.

6 H. M. Levin, C. R. Belfield, P. Muennig, and C. E. Rouse, “The Costs and Benefits of an Excellent Education for America’s Children,” Working Paper, Teachers College, Columbia University, 2006.

7 Studies that provided the basis for this figure at Robin Hood included: W. S. Barnett, “Long-term effects on cognitive development and school success,” in Early Care and Education for Children in Poverty: Promises, Programs, and Long-Term Results, eds. W. S. Barnett and S. S. Boocock (Albany, NY: State University of New York Press, 1998); F. Campbell and C. Ramey, “Effects of early intervention on intellectual and academic achievement: a follow-up study of children from low-income families,” Child Development 65 (1994): 684–698; A. Reynolds, “Effects of a preschool plus follow-on intervention for children at risk,” Developmental Psychology 30, no. 6 (1994): 787–804; L. Schweinhart, J. Montie, Z. Xiang, W. S. Barnett, C. Belfield, and M. Nores, “Lifetime Effects: The High/Scope Perry Preschool Study Through Age 40,” Monographs of the High/Scope Educational Research Foundation 14 (Ypsilanti, MI: High/Scope Press, 2005).

8 Studies Robin Hood drew from for this assessment included: D. P. Culhane, S. Metraux, and T. Hadley, “The Impact of Supportive Housing for Homeless People with Severe Mental Illness on the Utilization of the Public Health, Corrections, and Emergency Shelter Systems: The New York-New York Initiative” (Washington, D.C.: Fannie Mae Foundation, 2002); T. E. Martinez and M. Burt, “Impact of permanent supportive housing on the use of acute care health services by homeless adults,” Psychiatric Services: A Journal of the American Psychiatric Association 57, no. 7 (2006): 992–999; L. Sadowski, R. Kee, T. VanderWeele, and D. Buchanan, “Effect of a housing and case management program on emergency department visits and hospitalizations among chronically ill homeless adults: A randomized trial,” Journal of the American Medical Association 301, no. 17 (2009): 1771–1777.

9 Michael Weinstein (with Cynthia Esposito Lamy), “Measuring Success: How Robin Hood Estimates the Impact of Grants,” New York, NY: Robin Hood Foundation, February 2009, p. 42.