April 6, 2022

Helping skills and training practitioners go “beyond the pilot”

Post-mortem evaluations don’t help practitioners meet their goals or build evidence generation capacity.

Anthony D'Ambrosio

There are promising skills development programs being implemented in communities across Canada. But how do we help these programs reach more people? That’s where Blueprint comes in.

Today we released Blueprint’s FSC Evidence Summary 2021: Our Insights — a report that outlines our second year of evidence work with the Future Skills Centre (FSC), which you can read here. Developing this report was a great opportunity for Blueprint to reflect on what we’ve learned over the past year as an evidence partner, including insights generated from FSC’s first-ever round of pilots. In the report we share how we worked to overcome common challenges in evaluating new programs to ensure they can thrive “beyond the pilot.”

A big part of what we do at Blueprint is generate evidence about new skills and training programs. From the beginning of our partnership with FSC, we recognized the importance of rethinking conventional approaches to supporting pilots and evaluation. In the process, Blueprint has learned a lot about how we and other evidence partners can set pilots up to succeed in the long-term.  

The challenge: the post-mortem model of pilot evaluation

Pilots are an attractive policy tool for decision-makers: they offer the opportunity to beta-test new solutions and foster innovation through the smart use of public resources.  But too often, pilots — even successful ones — never move beyond this initial testing stage. Rarely are lessons from pilots used to meaningfully expand or improve on a service.  

A big part of this disconnect is the traditional role evaluation has played in publicly funded pilots. Too frequently, evaluation is treated as a rigid, point-in-time activity – essentially, a post-mortem used by funders or policymakers to see whether the pilot met predetermined key performance indicators. Evidence is gathered by service practitioners, often off the side of their desks, and is later analyzed by evaluators to gauge success.

This “post-mortem” system of pilot evaluation has a few major problems:

1. Post-mortems don’t help practitioners meet their goals or build evidence generation capacity

Pilot evaluations are often conducted at arm's length from practitioners, with service providers collecting and sharing data with evaluators for analysis ― with minimal back-and-forth with those doing the evaluating. In contrast, working directly with practitioners to collect, interpret, and use evidence throughout a pilot can support ongoing improvement. This allows evaluation partners not just to passively gauge program achievements, but to actively enable program success.

This kind of collaboration also enhances the capability of practitioners to use data and evidence in their work. For example, the Toronto Region Immigrant Employment Council (TRIEC) requested additional help to support evidence generation for an FSC-funded newcomer employment program. Blueprint worked closely with TRIEC staff early on to identify what internal capacity and strengths they had in place and developed an ongoing, multi-step coaching plan to support their evidence goals. TRIEC was able to combine their own expertise in working with their target population with Blueprint’s expertise in building evidence generation tools that were rigorous and sound. With this targeted support and ongoing collaboration with Blueprint, TRIEC maintained ownership of delivering evidence activities and built new evaluation capacity in the process.

2. Post-mortems don’t help decision-makers measure what really matters

Often, evaluators work with the information they have at the beginning of a pilot to set up the best possible performance indicators and measurement tools, but these assumptions are just that — assumptions. If evaluators don’t have the capacity to be nimble and adjust their approach as the pilot evolves, key learning can be lost. That’s why it’s so important to work with our partners to make sure that the evidence being collected yields insights that are relevant and actionable.  

Blueprint is the evidence partner for EDGE UP, an FSC-funded program delivered by Calgary Economic Development (CED) that is designed to help displaced oil and gas workers train for jobs in Alberta’s growing tech sector. COVID-19 meant that the program had to pivot to online delivery quickly and unexpectedly, an adaptation that was challenging but mostly successful. The pandemic had additional impacts: massive disruptions in the labour market meant that it was difficult for EDGE UP participants to find jobs immediately after program completion. However, Blueprint’s evaluation activities found that the program reached the target audience, had a very high completion rate, and helped participants achieve their learning goals. If program efficacy was measured only by tracking near-term employment outcomes, we would have missed critically valuable insights about the value of the program.  

We also would have missed evidence-informed opportunities for mid-program improvements. For example, we used data on participant experiences in the program from early cohorts to find what was working and where there were pain points – this led to CED adding orientation sessions, restructuring some of the training, and adding a work placement component for participants. Our approach, which included in-depth interviews with participants and staff, meant we could generate insights that were both relevant for near-term adaptation and helped prepare EDGE UP for successful scaling.



3. Post-mortems don’t equip pilots for scaling

Too often, program evaluation marks the de facto endpoint of a pilot: once the results are in, there isn't a plan in place to use what was learned to grow impact and enable broader systems change. Even when there is institutional support and resourcing to replicate or expand a promising pilot, major implementation challenges remain. Simply copying and pasting a strong pilot into a new context will not generate the same outcomes. Yet evaluators rarely consider gathering the evidence needed to scale a pilot successfully. In fact, scaling — or preparing for scaling — requires multiple types of evidence that aren't often included in pilot evaluations.

Our work with the Future Skills Centre on the Scaling Up Skills Development Initiative is designed to tackle this challenge head-on. This initiative is supporting 10 promising pilot models, including EDGE UP, to prepare for broader scaling. Through our involvement we are developing customized evidence plans for each project and providing tailored assistance that aligns with each practitioner's expertise and capacity. We are working with each model not only to evaluate outcomes and impact, but to optimize design and delivery and to gather evidence on factors that affect scaling success, like service demand and value for money. We are excited to continue refining our understanding of the different pathways pilots can take to scale successfully, and to keep working closely with these projects as they progress on their journeys.

Room to grow: bringing evidence “beyond the pilot”

We learned a lot from evaluating the first round of pilots funded by FSC, and we’ve been able to apply these lessons in our work supporting new evidence partnerships, including a targeted focus on practitioner capacity-building and preparing programs to scale.  

Evidence generation can and should be an ongoing, continuous part of supporting service practitioners, rather than a report card. This is especially true of pilot programs, where practitioners need as much information as possible to navigate uncharted waters — and it was perhaps doubly true for these first FSC pilots, as practitioners were forced to adapt quickly to the unprecedented disruptions of COVID-19.

In our role as an evidence partner, we are working with service practitioners to innovate and adapt across each stage of a pilot's delivery: expanding capacity in design, getting creative with data collection, and adopting a learning mindset at every stage of implementation.

It’s not that we have this all figured out: helping pilots make it to the next stage of expansion or scaling is a complicated process. But by being open to new ways of doing things, working closely with service delivery partners, and thinking about future possibilities from the early stages of our involvement, we’re helping innovative programs turn evidence into action.

We are excited to keep thinking “beyond the pilot” in the year ahead and gratified to be working with FSC and our partners in the future skills ecosystem to try new ways of building evidence.

This post was originally published on LinkedIn. Read here.