One of the areas of Social Innovation that has received considerable attention over the last few years is social impact measurement – so much so that it has become a legitimate and profitable business for many. As it has grown in popularity, the dominant discourse has positioned impact measurement as a necessary and incontrovertible partner to Social Innovation, while proposing a small number of tools (monitoring, evaluation and impact measurement) with which to do it.
As a Social Innovation practitioner who has been in the field for over 15 years, I am concerned about this growing trend. The law of the instrument tells us that when you have a hammer, everything looks like a nail, and that is what I see happening all around when it comes to social impact measurement. It is what is being taught to students of social innovation, non-profit management and social entrepreneurship; it is what funders are demanding of their grantees; and it is what investors are asking of new start-ups. But is this the right approach? I am concerned that our unquestioning acceptance of this impact measurement narrative has a huge opportunity cost for true social innovation, and is potentially damaging both to the integrity of our field and to the impact that we hope to generate. In writing this article, my invitation to readers is to step back: to take a moment to think critically about what the impact measurement narrative is telling us; to reflect on the underlying assumptions, limitations and challenges it may hold; and to ask ourselves what we may be missing by simply accepting this narrative. My hope is that this article and the questions it raises can contribute to a broader discourse centered on better understanding impact, while helping to shape a set of guidelines and tools that are more aligned with helping Social Innovation deliver on its promise of a better world.
I will start out by acknowledging that where we are now is a significant improvement on where we have come from. We do not have to go too far back to find evidence of the world before impact measurement: a glaring lack of quantitative evaluation in state-funded mega-projects, a private sector interested only in measuring profits, and a non-profit sector wrestling with a mode of monitoring and evaluation (M&E) rooted in fears of non-compliance and an assumption of development as predictable and linear[1]. Those who have worked to improve the measurement of impact in these kinds of interventions have made significant and vital contributions to our knowledge and understanding of the nature of social change and social innovation.
My criticism of social impact measurement lies not in these advancements per se, but in how they have come to shape a rather limited understanding of what social impact measurement is and how we should go about it. Many of the concepts, and even the language, now used in social impact measurement have evolved from monitoring and evaluation (M&E) processes in the non-profit sector – processes built on highly transactional, low-trust relationships and developed so that NGOs could prove results and account for funding. If there is any doubt at all about these similarities, take a moment to compare the (much-criticized) Logical Framework that was used for years in development work with the standard steps of impact measurement.
While the academic theory underpinning impact measurement is based on aspirational goals about when and how an impact assessment should be done, the reality is that these ideal conditions are rarely achieved due to resource constraints, project timelines and external drivers. Compromises are made and impact assessors do what they can to get the data they need. The result, in the words of Lorenzo Pellegrini, Associate Professor at the International Institute of Social Studies at Erasmus University, is that impact evaluations are given ‘an aura of scientific truth that in most cases is nothing more than an aura’. What happens in practice are evaluations that are linear and time-bound, best suited to a project model of Plan, Implement, Evaluate. They are conducted by outsiders and driven in most cases by questions of performance and compliance, missing valuable opportunities for learning and reflection for those closest to the project. To those being evaluated, the process itself reflects a lack of trust and can feel extractive, patronizing and colonial. The drive to account for attribution runs counter to the bottom-up, multi-stakeholder values of social innovation and can even damage important partnerships and relationships between stakeholders, local actors and beneficiaries. Doug Reeler (2007) summarizes these kinds of traditional M&E practices: “Created to help control the flow of resources, these frameworks have come to control almost every aspect of development…subordinating all social processes to the logistics of resource control, infusing a default paradigm of practice closely aligned with conventional business thinking.”[2]
The influence of conventional business thinking can also be seen in other aspects of the impact measurement narrative. The narrative is captured nicely in words like additionality, intentionality and measurability, but when we examine the drivers behind this movement we come to understand that the primary purpose of measuring impact is to provide a competitive advantage in order to attract more funding. Now, I am not arguing that competition is bad, nor is there anything wrong with trying to attract funding: it is a reality that we all must face. But should this be the primary reason for measuring impact? And if every dollar we spend on measurement is a dollar not spent on impact, how do we appropriately evaluate the cost-benefit?
To answer these questions, it is worth taking a moment to think about how we got here. First of all, we should acknowledge that impact assessment is big business. So when we come across statements like this one – ‘at the highest level we must measure impact if we are to consider who we are and what we do to be driven by our desire to affect positive change’[3] – it is important to stop and think about who is crafting this narrative. Is the impact measurement imperative really an imperative? As social innovators and entrepreneurs, are we ready to accept that our intrinsic motivations and drivers are connected to and/or validated by our ability (or inability) to quantify or qualify the impact of our actions? Or is there something we may be missing by blindly following this path?
Paolo Quattrone, an expert in accounting, governance and reporting and Professor at the Alliance Manchester Business School, would argue that we are indeed missing something. Quattrone contends that the idea of measuring for transparency can be problematic when dealing with complex interactions – which social innovation inherently does – in two ways. First, impact measurement ‘pre-supposes what is right’ – a concept that runs counter to the claim that social innovation places beneficiaries at the center of a bottom-up, user-driven process. By setting targets and indicators we make assumptions on behalf of the beneficiaries about desired outcomes, an idea that links back to my previous point about the imposing and extractive nature of impact evaluations and the inherent power inequalities that they represent. Second, Quattrone’s work in accounting and transparency demonstrates that when we set targets to measure, we unwittingly narrow our focus and create blind spots, while also ignoring that which cannot be measured[4]. Yet in the field of social innovation, it is often these kinds of hard-to-measure impacts – shifting social norms, strengthening citizenship and democracy, or creating behavioral change – that constitute the long-term impact we seek to create.
What Quattrone advocates for in management practices is ‘a system that should force the exercise of judgment and scrutiny by generating debate and productive tensions around doubt rather than alignment and shared fallacies around certainty’[5]. What Quattrone is asking of us is what Thomas Schwandt[6] refers to as an ‘evaluating mindset’: curious and inquisitive, systematic in our approach, reflective, self-critical and open-minded. This is difficult to do in a transactional system highly focused on the connection between payment and performance. So what is the alternative?
What I propose is a reordering of the impact narrative so that the primary purpose becomes understanding impact. Understanding impact involves measurement, but it also involves management.
We must prioritize impact management processes that incorporate non-linear models that can reflect the reality of distributed goals and uncertain change pathways. They should embed processes of learning and assessment that are reflective, iterative and regenerative. And they should engage with all stakeholders as active contributors to these processes, generating the kinds of self-reflection and internal learning that enhance social change processes.
Our impact measurement approaches should reflect an investment of resources that is commensurate with the expected return. We cannot align our aspirations to a single gold standard of impact evaluation but must recognize that the design, method, and cost will vary substantially and that there will be times when not measuring impact will make the most sense.
Above all, we must strive to be master craftsmen of our trade. A master craftsman is recognized not just by the wide variety of tools that they employ, but by the skill and dexterity with which they use those tools. As practitioners, we must become familiar with a range of tools for understanding impact, mixing, matching and adapting a combination of frameworks, concepts and methods that is unique to each situation.
Corrina Grace | LinkedIn | Website
I am a socially-driven entrepreneur, systems thinker and sustainability leader with almost 20 years working in the field. I hold a Master’s in Social Innovation for Sustainable Development, and bring over a decade of experience as a practitioner living and working at the nexus of environmental degradation and economic poverty in Central America.
I am the co-founder of SERES, a UNESCO award-winning organization, where I currently serve as a Board Member and Senior Advisor. An internationally recognized facilitator, I have worked with communities and organizations in Africa, Australia, Europe, North America and Latin America, in a variety of settings.
I bring an engineer’s love for solving problems, a pioneering spirit and an entrepreneurial mindset to everything I do, along with a deep and unwavering commitment to justice and equality for people and the Planet.
Learn more at Social Innovation Academy
This article was written in collaboration with the Social Innovation Academy – the first fully online management training programme focusing exclusively on social innovation. Subscribe to our newsletter, join our private LinkedIn group, become one of our friends or follow us on social media (LinkedIn, Twitter and Facebook). We welcome all requests for collaboration here.
[1] Reeler, D. 2007. A Theory of Social Change and Implications for Practice, Planning, Monitoring and Evaluation. Community Development Resource Association, Cape Town, South Africa.
[2] Reeler, D. 2007. A Theory of Social Change and Implications for Practice, Planning, Monitoring and Evaluation. Community Development Resource Association, Cape Town, South Africa.
[3] “Why Measure Social Impact? 4 Reasons for Change Makers.” n.d. Accessed April 3, 2020. https://www.sopact.com/perspectives/why-measure-social-impact.
[4] Quattrone, Paolo, Cristiano Busco, Robert Scapens, and Elena Giovannoni. n.d. “Dealing With The Unknown: Leading in Uncertain Times by Rethinking the Design of Management Accounting and Reporting Systems.” Chartered Institute of Management Accountants 12 (14). Accessed April 3, 2020. https://www.cimaglobal.com/Documents/Thought_leadership_docs/Academic-research/4466%20Dealing%20with%20the%20Unknown%20Research%20Paper%20STAGE%203.pdf.
[5] Quattrone, Paolo, Cristiano Busco, Robert Scapens, and Elena Giovannoni. n.d. “Dealing With The Unknown: Leading in Uncertain Times by Rethinking the Design of Management Accounting and Reporting Systems.” Chartered Institute of Management Accountants 12 (14). Accessed April 3, 2020. https://www.cimaglobal.com/Documents/Thought_leadership_docs/Academic-research/4466%20Dealing%20with%20the%20Unknown%20Research%20Paper%20STAGE%203.pdf.
[6] Schwandt, Thomas A. “Educating for Intelligent Belief in Evaluation.” American Journal of Evaluation 29, no. 2 (June 2008): 139–50. doi:10.1177/1098214008316889.