Choose your own adventure – navigating shared measurement and evaluation in place-based approaches.

In Part 2 of our series on place-based approaches, Dr Jess Dart talks to Jen Riley about her experience with an Australian flagship place-based initiative, Logan Together, as well as exploring the thinking and tools covered in Clear Horizon’s new online course: ‘Evaluating systems change and place-based approaches.’

For more on what place-based approaches are and why they’re here to stay, see part 1 of the series.

We last spoke about the place-based collective impact initiative Logan Together, and how they’d focused on developing a fantastic measurement framework, but less so on other parts of evaluation.

Can you talk more about the difference between shared measurement and evaluation?

Shared measurement is about looking at your community and working out what the underlying problems are. That means consulting with people and reviewing all the publicly available data to try to identify and understand what’s happening. It’s an early step in the collective impact process, and from there you choose a few quantitative indicators to be the ‘beacon on the hill,’ or your high-level goals. For Logan Together, it was about helping 5,000 more kids to thrive by the age of eight. These are population-level goals, and their Shared Measurement Framework really focused on tracking indicators of change at this level. And while you need to set these high-level aims to help mobilise people at the beginning, it’s just not enough in terms of evaluation.

The reality is, it might take 9 to 12 years to see a real change in these population indicators – they are ambitious and long term. So, what happens after three years? You might be doing a great job, but there is no population-change data to back this up.

That’s why you really need to understand your theory of change – what conditions would we need to see in place for this change to occur? What would be changing in the system if we were on track to achieving those population-level changes? What would the phases of change be? What would we expect to see changing at the beginning, middle and end of our intervention? These are critical evaluation questions, which also need to be supported by different types of evaluation at the different stages and levels of maturity of your initiative.

That’s a lot to unpack. Where do you suggest people start?

We’ve developed a practical framework to help people better understand and evaluate place-based initiatives. In our upcoming course, we cover the basics of place-based approaches before moving onto the theory of change, and the tools available for tracking and understanding changes across the different phases.

We’ve also got this neat little device called a ‘concept cube’, which helps you understand the different lenses you may need to apply depending on your community’s context, as well as a set of evaluation questions. It’s part of a mega toolkit that we’ve assembled from the literature and from our own bespoke materials.

How do you select the right tool?

The planning framework helps you work out where you are in terms of the level of maturity of your initiative, as well as identifying the key things that are important to you. We recognise that because every place-based project has a different context, we can’t provide one simple recipe, so it’s more about helping you understand your context, your level of maturity and what lens you need to apply. Once you’ve worked those out, we can help with your evaluation questions, and provide relevant tools to support these.

It sounds like a choose your own adventure book!

That’s exactly what it is! Depending on the community you’re working with and the stage of your intervention, you might need highly complex and sophisticated tools or something much simpler and more accessible. In our own work, we found some gaps, and that’s why we’ve developed our own resources, which we want to share with our learners.

Who do you think should join your upcoming course, and what will they walk away with?

I think it will be useful for evaluators who want to learn more about evaluation beyond the traditional program way of working. They will come away with a much better understanding of the differences between evaluating in a program context and evaluating in a systems or place-based context.

For practitioners working in place-based approaches, I’m hoping they’ll be able to identify the questions they need to ask, knock out a basic framework, and identify some of the tools to help them collect data and develop a basic plan. They should be able to hit the ground running.

Would this be a good course for funders to attend?

Absolutely. Funders need to understand what sort of outcomes to expect, and also understand what a promising place-based approach looks like, versus one that’s really not working. And the signs for what’s working and what’s not can look very different in this context compared to a traditional program.

What are three reasons for doing this course?

1. If you want something to complement your population level measures and are wondering how you can show you’re on track, we’ve got practical tools and resources to help you identify and demonstrate your impact.

2. It’s a great opportunity for those looking to expand their skills beyond traditional program evaluation, and perhaps, like me, challenge themselves to think differently about their approach.

3. Learning and innovation are critical to place-based approaches – and this course can help you embed these in your initiative.

And perhaps the chance to meet and connect with others doing this work and build a network?

Yes! This framework covers what we know so far – but I’m still learning, we’re all still learning. And we have to keep sharing what works and what doesn’t. We all bring different experiences and ideas to the table – this is the chance to innovate and support each other. We’re in uncharted territory, so this is a bit like finding our way in the dark with a blindfold! What we are offering is a few directions, help in creating a plan, and the opportunity to connect with a few buddies.

Clear Horizon Academy’s new six-week online course, Evaluating systems change and place-based approaches, starts on 4 November 2019. For more information and to enrol, visit the course overview.



Dr Jess Dart

CEO & Founder

An inventor of practical methodologies and a highly sought-after facilitator, Jess navigates complexity with ease and helps her clients become clear about their desired outcomes and how to get there.

Recipient of the 2018 AES Award for outstanding contribution to evaluation, Jess has over 25 years’ experience in evaluating and designing social change initiatives and strategies in Australia and overseas.

Jess founded Clear Horizon Consulting in October 2005 and is now its CEO. She is also a board member of the Australasian Evaluation Society.

Jess is passionate about developing and designing real-world evaluation and strategy for social justice and sustainability. She works particularly with systems change interventions, large-scale strategy and social innovation. After completing her PhD, she co-authored the Most Significant Change (MSC) guide alongside Dr Rick Davies, which has now been translated into 12 languages.

MSC is well suited to evaluation in complex, emergent contexts. Jess’s latest innovation, Collaborative Outcomes Reporting (COR), is a collaborative form of impact evaluation.

Jess is also an active mum and has two teenage boys. In a quiet moment, she loves reading far-future science fiction and enjoys long distance running.
