Some of my favorite problems to tackle are those of measurement. For instance, how do we think about social norms, but in a quantitative way? And, is it even possible to create a universal (or even multiple-country) indicator for women’s empowerment (been thinking about this one A LOT lately). Over the past year, I’ve also spent a lot of time thinking about how to measure skills, and specifically, how USAID should measure changes in skills as the causal effect of its workforce development programs all over the world.
Turns out, this question is not even remotely trivial, but I’m very excited to share some recently released guidance on exactly how USAID plans to incorporate some of the latest and greatest advances in measuring many (but not all) types of skills in its youth programming around the world. This is officially joint work with Catherine Honeyman of World Learning (a delightful collaborator, if I do say so myself), but it also benefited enormously from a wide consultative process with stakeholders across USAID, in DC and around the world, as well as with implementing partners and peer research organizations working on skills development for youth.
As background, USAID supports youth programming in 60 countries and workforce development programs in 30 countries. All of these programs are working to impart technical, vocational, social-emotional, digital, reading, and mathematics skills (and more) to various program participants. And all of them (yes, ALL of them) are required to report back to USAID on their projects in various ways.
Alongside all this, there has been a ton of recent work on how to measure skill development. As we learn more about how much skills like perseverance, grit, and dependability matter for labor market outcomes, it becomes more important to measure them in a way that has meaning across different contexts. After lengthy reading and consultation, we identified two skills or groups of skills to serve as “standard foreign assistance indicators”–percent of individuals with improved soft skills and percent with improved reading skills–and three designated as “supplemental indicators”: percent of individuals with improved math skills, percent with digital literacy skills, and percent passing a context-relevant technical skills assessment. More precise wording is in the table, with a much longer justification in the how-to note itself.
In short, though, it’s really tough to standardize measures across all those countries and all those contexts.
I won’t lie to you; we did not find all the answers. However, I think the note provides an excellent jumping-off point for ongoing conversations about how to measure important workforce development outcomes, and it highlights important gaps in our understanding of how to measure those outcomes in a consistent and useful way. Please reach out if you have ideas!
We also put together a fun (measurement is FUN, I SAY!) FAQ that you can read here.