Prove and Improve Your Training Program with a Pilot Project

One of the concerns companies have when rolling out a high-value training project is that they’re investing a good portion of the budget into a program that hasn’t been tested. That’s where a pilot project comes in. By running a small-scale test of the larger project, you get valuable learner feedback to help you make revisions and feel confident in the full rollout.

But many people aren’t sure where to start with a pilot and when it should be part of the process. Who should be involved? What should you be testing? How do you get feedback and turn that into the right revisions?

"You may have designed what you believe is the best possible experience in the world, but watching real people interact with your site is often a humbling experience. Because you are not your user." -- Peep Laja, ConversionXL founderA pilot project, also known as usability testing, is one of those things that already overburdened departments feel they don’t have time or resources for. However, particularly for large projects, doing a pilot is critical to getting the results you want.

As ConversionXL founder Peep Laja says, “You may have designed what you believe is the best possible experience in the world, but watching real people interact with your site is often a humbling experience. Because you are not your user.” If you don’t spend the time understanding how learners will respond to your course, you could spend a lot of time and money on a metaphorical paperweight.

High-Value Training through a Pilot Project

These six steps will start you on the path toward effective usability testing.

  1. Identify your goals for the pilot.

You should already know your goals for the program, since you began your planning with the end in mind. Now you need to decide what you want to learn from the usability testing. This sample exercise from Becky White of Mutual Mobile, while written for software design, is a great way to focus your questions. Gather your team and start brainstorming specific questions such as:

  • Do learners navigate the course correctly?
  • Are learners using this feature (action planner, pop-up tip, OTJ exercise)?
  • Does the language feel natural to the learners?
  • Did they learn what you thought they should learn?

Then group similar questions together, and distill them into a short set of questions you need to answer with your pilot. These questions will help guide you as you decide what activities or pieces of the course need to be evaluated.

  2. Decide what to measure.

Because the point of a pilot program is to get information and feedback to allow for revisions prior to a full rollout, your program will not be fully finished at the time you do the pilot. But you do need to select pieces of the course that will allow you to answer the questions that came up when you identified the pilot goals.

If one of your goals is to test navigation, you know you’ll need to provide several consecutive modules to see if learners navigate within and between modules. If you need to evaluate language, a gamification piece may not be as critical to test as a video, audio or text-heavy portion of the course. Essentially, by setting goals, you will understand what the key portions of your training are. Test those.

  3. Recruit testers.

By going through the empathetic learner profile process, you should have a good handle on who your audience is and what they care about, in a way that surpasses demographics. Use those profiles to build a target audience who will participate in the pilot. Often your stakeholders and project champions will also be able to help you identify the right people, particularly if you need to test this among multiple learner segments.

A pilot test group doesn’t have to be large, but if your final audience is diverse, you need to mirror that diversity in the test audience. Make sure different departments and job roles are represented in large enough numbers to give you enough data to work with. Depending on how large and how homogenous your audience is, you may want to use anywhere from 2% to 10% of your final audience size.
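To make that math concrete, here is a minimal Python sketch of one way to size a pilot group that mirrors department diversity. The department names, headcounts, sampling rate and minimum group size are all hypothetical placeholders, not figures from any real program.

# Minimal sketch: sizing a pilot group that mirrors audience diversity.
# All department names and headcounts below are hypothetical placeholders.

audience = {
    "Sales": 220,
    "Customer Support": 140,
    "Field Operations": 90,
    "Finance": 50,
}

PILOT_RATE = 0.05    # pick a point in the 2-10% range that fits your budget
MIN_PER_GROUP = 3    # floor so small departments still yield usable feedback

pilot_plan = {
    dept: max(MIN_PER_GROUP, round(headcount * PILOT_RATE))
    for dept, headcount in audience.items()
}

for dept, testers in pilot_plan.items():
    print(f"{dept}: {testers} testers")
print(f"Total pilot group: {sum(pilot_plan.values())} of {sum(audience.values())} learners")

The minimum-per-group floor is a judgment call: a straight 5 percent of a 50-person team is only two or three people, which may be too few to surface meaningful patterns within that group.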

  4. Collect metrics.

"If you plan on gathering quantitative data, make sure you collect qualitative data so you have a system of checks-and-balances, otherwise you run the risk of numbers fetishism." -- Jerry Cao, UX content strategist at UXPinThese will also vary depending on what your goals are. Carefully look at your goals to understand what mix of qualitative and quantitative data you need. Then decide how you will collect that data. It could be physically watching users take the course to a Likert scale, or any mix of multiple methods. Usability.gov includes some helpful examples of quantitative and qualitative data collection methods.

One caution, from Jerry Cao, UX content strategist at UXPin: “If you plan on gathering quantitative data, make sure you collect qualitative data so you have a system of checks-and-balances, otherwise you run the risk of numbers fetishism.”
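As one illustration of that checks-and-balances idea, here is a minimal Python sketch that keeps each Likert score attached to the tester’s free-text comment, so an average never circulates without the context behind it. The questions and responses are made up, and your actual survey tool or observation notes will look different.

from statistics import mean

# Minimal sketch: pairing Likert scores with qualitative comments so the
# numbers never travel without their context. All responses are made up.

responses = [
    {"question": "Navigation felt intuitive", "score": 2,
     "comment": "Couldn't find my way back to the module menu."},
    {"question": "Navigation felt intuitive", "score": 4,
     "comment": "Fine once I noticed the progress bar was clickable."},
    {"question": "The language felt natural", "score": 5,
     "comment": "Reads like how we actually talk to customers."},
]

# Group responses by question, then report the average score next to
# every comment that produced it.
by_question = {}
for response in responses:
    by_question.setdefault(response["question"], []).append(response)

for question, items in by_question.items():
    average = mean(item["score"] for item in items)
    print(f"{question}: average {average:.1f} / 5")
    for item in items:
        print(f'  - ({item["score"]}) {item["comment"]}')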

  5. Use the pilot data to improve your training program and communication.

There is no point in doing usability testing if you don’t actually use the data. It’s great if you found out learners were using the course exactly as intended and learned exactly what they should have. But odds are it didn’t work out quite that well.

Take what you discovered and make tweaks. If users struggled with navigation, take a hard look at colors, icons, placement and other things that would affect how people use your course. 

Did they learn what you thought they should learn?

If they didn’t take advantage of an action planner, your interviews should tell you if it was a graphic design issue (they didn’t see it) or an instructional design issue (they didn’t think it would be useful).

Then use everything you learned to shape your communication for the full training launch. If you discovered the training solved an unexpected problem, make sure to highlight that. If, in interviews, your pilot group told you they usually think training is pointless, others probably do too. Understand why, and overcome those objections in your launch communication.

  6. Turn testers into evangelists.

Once you have worked with your test group, you’ll have a better idea of who found it valuable and who appreciated your honest desire to improve. Take those people who are interested in what you’re doing and ask them to share their experiences with others.

This could take a number of forms. These evangelists might send emails to their department, present the program at a team meeting or let you use quotes from them in rollout videos.

Using these six steps will help you get as much benefit out of a pilot program as possible. If you take the time to plan well, work through the process, honestly evaluate the data and make changes, you dramatically increase your chance of success when the full program makes its debut.

Now that you’ve proved (and improved) your project with a pilot test, we’ll talk next time about the best way to roll out the full project.
