How does MSC compare to traditional monitoring?
MSC differs from the traditional methods of monitoring and evaluation that you might be using, and it complements them by filling in some of their gaps. In general, MSC is a qualitative method, while traditional monitoring methods are more quantitative in nature. To illustrate just how different MSC is, and why it works so well as a supplementary method, compare the two approaches below.

MSC:
- Inductive – about unexpected outcomes
- Diversity of views (from field staff and beneficiaries)
- Open questioning
- Participatory analysis
- Puts events in context – ‘thick description’
- Enables a changing focus on what’s important
- Outer edges of experience
Traditional:
- Deductive – about expected outcomes
- Indicators often determined by senior staff
- Closed or specific questioning
- Analysis by management
- Based on numbers – no context
- About ‘proving’
- Central tendencies
The sections below explain in more detail how MSC and traditional monitoring compare.

Inductive vs. deductive views
Traditional methods often use deductive reasoning. Usually, there is a theory about what is supposed to happen, and then we analyse quantitative data to find out if the outcome that we expected occurred. But what about the outcomes we don’t expect? Deductive reasoning isn’t ideal for that. MSC uses inductive reasoning instead. By asking participants to make sense of events after they have happened, MSC can tell us about the outcomes we expect as well as the outcomes that we did not expect. This is useful because it can tell us things that we don’t realise we need to know.
By using the MSC technique to gather information and encourage regular reflection from participants about the intangible and indirect consequences of their work, teams can change direction to achieve more of the outcomes that are important to them.

Diverse vs. limited views
In many monitoring and evaluation systems, the indicators we measure are defined by people who are distant from where events happen. When senior executives and specialist research units define indicators for monitoring and evaluation, these are defined by looking outwards from the project (program out). In MSC, this is done differently. The people closest to the event, such as field staff, beneficiaries, front-line staff and clients, are the ones who identify stories that they think are relevant (context in). Other participants then choose the most significant stories, so diversity of views is a core part of the way the organisation decides which direction to take.

Open vs. closed questions
Here are some examples of closed questions:
- Did you like the program?
- On a scale of 1-10, how would you rate the program?
Questions like these result in numerical data that can be analysed quantitatively. MSC analyses qualitative data, so it uses open questions. For example:
- Over the past five years, how would you describe your experience of the program?
- From your point of view, what was the most significant change that took place concerning the quality of people’s lives?
Using MSC, participants use their judgement to identify and select stories. To do this, they use open questions like these, which give beneficiaries, field staff, clients and front-line staff a voice in the process.

Participatory vs. centralised analysis
Often in traditional methods of monitoring and evaluation, data is analysed at a senior level. Typically, fieldworkers do not analyse the data they collect; they pass the information on for others to analyse.
In the MSC process, information is not managed centrally but is distributed through the organisation and processed locally. Staff collect information about events and evaluate that information according to their local perspective.

Context vs. lack of context
Quantitative data is often analysed without context. Tables of statistics are usually sent from field offices to central office staff for analysis, but the people analysing the data are a long way from the field site. With limited text comments from fieldworkers, the analysis happens without much context or the perspective of beneficiaries and staff.
MSC uses ‘thick description’: detailed accounts of events in the local context, with detail about people and their views. These descriptions are usually given through anecdotes or stories that also capture the writer’s interpretation of what’s significant. This makes drivers for change visible. With this additional information, teams can see what happened and why, and can focus on what has changed and why this is important.

Static vs. dynamic indicators
In most monitoring and evaluation systems, indicators remain the same for each reporting period. The same questions are asked repeatedly, and the focus doesn’t change. With MSC, the type of data collected is dynamic and changes over time. Participants choose what to report, and these choices reflect real change in the world, and changing views in the organisation about what matters. That information can then inform project activities, ensuring that the project reflects what’s important to the people involved. This makes MSC particularly good for working in emergent and complex contexts.

Outer edges vs. central tendencies
MSC focuses on the outer edges of experience. Most social science research and evaluation is concerned with finding out what most people’s experience of a program or intervention is; this reflects the scientific research approach, where the main focus is on proving or disproving hypotheses. MSC, by contrast, is interested in the outer edges of experience rather than in generalising about the most common experience. This makes MSC useful for investigating the unintended outcomes of programs, although it also means MSC is not intended to produce generalisable results.
MSC shares this focus on particular cases, rather than averages, with related approaches such as:

- Appreciative inquiry
- Success case method (Brinkerhoff)
- Critical incident technique