Developing Effort/Impact Ratings for Continuous Improvement

The Tool: Effort/Impact ratings 

The purpose: To prioritize high-impact changes for improvement to e-learning or instructor-led training

Use Case: 

If you have created recommendations for continuous improvement of an instructor-led or e-learning product, you may have a list of requests - everything from "the graph did not make sense" to "we were unclear about the purpose of the icebreaker activity."

You could pick suggestions at random to implement, but that will not demonstrate value to your SMEs and stakeholders. If you use qualitative coding (and maybe a little ChatGPT), you will have developed a list of the most important themes and suggestions to move forward.

But how do you know which ones are feasible? 

Enter: Effort/Impact Ratings 

Instructions:

  1. Use the COUNTIF function in Excel to rank-order which requests are the most frequent. Do most of your learners say they didn't understand a graphic? Do the majority of them access your e-learning on a mobile device and say it's not compatible? 

  2. Once you have a rank order of your requests, add two columns to your sheet next to each request: one for "Effort" and one for "Impact."

  3. Set some parameters - I like to use "High," "Medium," and "Low" for both categories. What I consider "high," "medium," or "low" effort depends on my competing priorities and how many people I would need to contract with. 

    1. If the project is mission-critical for the organization, something might be “low” effort, whereas if it is not high-priority, that same task might be seen as “high” effort. 

    2. You can use a rating such as "1 hour," "5 hours," and "10 hours," or even "$," "$$," and "$$$" - it's simply an estimate of "What would it take to accommodate this request?"

  4. To estimate impact, I also set parameters based on my experience as a designer, assuming I am the one responsible for implementing the suggestion. A request for "adding video" might make sense for an objective related to using new software, but would make less sense for an objective related to improving team communication skills. 

    1. When developing "impact" ratings, you can consider "How many learners will this change affect?" as well as "What is the impact of making this change on the organization overall?"

    2. Another way to phrase this question: "Is there evidence that points to this change as critical to the organization?" 

  5. After you have rated each change on effort and impact, you can draw your focus to the high-impact, low-effort changes that will really make your ILT, VILT, or e-learning shine. 

  6. Present these to your stakeholders as “potential changes in X product” and give a timetable for implementation. 
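The steps above can be sketched in a few lines of Python (as an alternative to Excel's COUNTIF); the feedback themes and the Effort/Impact ratings below are made-up examples, not data from any real course:

```python
from collections import Counter

# Coded feedback themes (hypothetical); duplicates represent multiple
# learners raising the same issue.
feedback = [
    "graphic unclear", "graphic unclear", "graphic unclear",
    "not mobile compatible", "not mobile compatible",
    "icebreaker purpose unclear",
]

# Step 1: rank-order themes by frequency (the COUNTIF step).
counts = Counter(feedback).most_common()

# Steps 2-4: attach Effort and Impact ratings. In practice these come
# from your own judgment and organizational priorities; the values
# below are illustrative only.
ratings = {
    "graphic unclear":            {"effort": "Low",  "impact": "High"},
    "not mobile compatible":      {"effort": "High", "impact": "High"},
    "icebreaker purpose unclear": {"effort": "Low",  "impact": "Medium"},
}

# Step 5: surface the high-impact, low-effort quick wins first.
quick_wins = [
    theme for theme, n in counts
    if ratings[theme]["impact"] == "High" and ratings[theme]["effort"] == "Low"
]
print(quick_wins)  # -> ['graphic unclear']
```

Because `most_common()` keeps the frequency ordering, the quick wins also come out ranked by how many learners asked for them.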

If you are lucky enough to have a team to develop these ratings, you can have each stakeholder independently rank the suggestions, and then compare. This takes more time, but also creates more buy-in for change. 

If you want to make this more visual, this resource from Six Sigma shows how to turn effort and impact ratings into a matrix.
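For a rough, text-only version of that matrix, you can bucket each rated suggestion into one of four quadrants; the suggestions and ratings below are hypothetical:

```python
# Hypothetical rated suggestions: (theme, effort, impact).
suggestions = [
    ("Redraw confusing graphic",  "Low",  "High"),
    ("Rebuild module for mobile", "High", "High"),
    ("Reword icebreaker intro",   "Low",  "Low"),
    ("Add custom video",          "High", "Low"),
]

# Sort each suggestion into a quadrant of a simple 2x2 matrix.
quadrants = {("Low", "High"): [], ("High", "High"): [],
             ("Low", "Low"): [],  ("High", "Low"): []}
for theme, effort, impact in suggestions:
    quadrants[(effort, impact)].append(theme)

# The Low-effort / High-impact quadrant is where to start.
for (effort, impact), themes in quadrants.items():
    print(f"{effort} effort / {impact} impact: {themes}")
```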

The Pros: 

Developing an effort-impact rating helps your SMEs and stakeholders easily gauge which requests are feasible. Everyone may agree they want videos added to a module, until they consider the potential cost. Stakeholders might think it's easy to "simply change the layout" until they see how many billable hours that will rack up, and which projects will be re-prioritized by that request.

The Cons: 

Both effort and impact ratings are highly subjective. To develop them as an instructional designer, you will need to rely on your own experience and your knowledge of organizational priorities. If these are not clear, you will need to circle back to your stakeholders for clarity. 

The Takeaways: 

Effort-impact ratings are one way to put requests for learning changes into a context that your SMEs and stakeholders understand. They can help you scope and prioritize updates and demonstrate the added value your work brings to an organization.

Have you used something similar in your work? Comment below to share!

