Category Archives: Assessment

7 Steps to Designing or Modifying Objective-Based Activities!!

This past Thursday I was lucky enough to present at the NASAGA conference in Baltimore. I’ve been involved with NASAGA since 2004, when I was introduced to this wonderful group of trainers, teachers, and designers. I’d really suggest attending next year’s conference in Seattle if you want an infusion of creativity and inspiration.

This presentation came from seeing the disconnect between activities, learning objectives, and evaluation in much of the training that I see. I want to see better matches between what we are teaching and how we are evaluating learners’ performance.

While these are presented as 7 discrete steps, a designer will likely need to revisit earlier steps and make adjustments along the way.

Determine the Goal

This is the big picture goal. Examples include “Students will develop their reading skills” or “The team will build stronger relationships.” These fall along a continuum and could be goals for 6-year-olds or 60-year-olds.

Identify Required Prior Knowledge/Analyze the Learners

First the designer must determine where on that continuum the learner lies. Letter-name recognition or college-level comprehension? Are these sales associates just beginning their careers, or are these executives selling at the enterprise level? There are cultural, cognitive, language, and developmental factors that you need to be aware of as you design effective learning activities. So who is our audience? And what do they know?

Identify Realistic Objectives for the Length of Session and Audience

This is where the designer decides what specific things participants will be learning or doing. Objectives are performance-based and verb-oriented. Some examples of objectives are:

  • Recall and state the 50 state capitals of the USA, or recall and state the features of a Ford F-150 pickup truck
  • Compare and contrast the features of two competing products
  • Use proper placement and form while running
  • Create a personal goal statement
  • Improve confidence in ability to perform

Choose Performance Benchmarks

How do we “grade” our learners? Too often I see multiple choice scores as the only way we measure learning, when in actuality we really want to know that learners can perform tasks in the workplace, or that they are building foundational skills. We do need numbers, qualitative rubrics, or rated rubrics. I’m enough of a pragmatist to know that we need reportable outcomes for funders, whether they’re our state governments, donors, or our managers. Here are some ways that we can describe performance benchmarks:

  • Correctly spell 8 out of 10 words.
  • List at least 5 differences between 2 products
  • Match customer attributes to the best available product
  • Run a mile in 10 minutes or less
  • Create one software routine that can correctly manage user input

Scale Story and Time to Create a Metaphor

The most effective training takes an authentic, real-life performance task and scales it to the “classroom” so that the class performance relates as closely as possible to what the learner will need to do out in the world. The prime example of this kind of emulation is a simulation: military video games, or NASAGA member Chuck Petranek’s drinking game. While a video game can give us amazing immersive graphics, a simulation can be as simple as having a chip represent an alcoholic beverage. Sometimes metaphors become belabored and unwieldy, and the focus shifts to the metaphor itself rather than the relationships it is meant to represent. We want stories that are simple and support the learning objectives. It can certainly be tempting to build overly complex stories, but scaling back and focusing on the learning objectives keeps the attention on the important stuff, not on building elaborate fantasy worlds.

Develop Participant Activities

Activities can be individual or team-based, face to face or online, blended, or built as games. We have a whole arsenal available to us. We can do a better job of convincing our stakeholders to use a variety of activities when we can clearly connect them to learning outcomes.

Select Debrief Techniques

How many times has evaluation at the end of a course devolved into a certificate of completion for the student and a survey for the instructor? That’s not enough. One of the ways that gamification can really be effective is by providing a more compelling feedback loop: badges can be great motivators and can track varying levels of mastery. Points boards can be used effectively (and carefully) to foster competition. But this is only the beginning. If I had my druthers, we would see more integration of training and feedback into the workplace or school setting. People often see training as separate from their workplace performance, and really, it would be good to tie annual reviews, reflection, and all the rest into our training.

When Multiple Choice is Just Not Enough (Anatomy of a Bad Assessment Item)

One of the traditional tasks in instructional design is the creation of “knowledge checks”: standard quiz items usually placed in context with the content. The thought behind these types of assessment items is to provide the learner with an opportunity to self-assess in sequence, immediately after information acquisition. A common item type is multiple choice. Multiple choice items can work if you have a great deal of time, a solid understanding of the material, and the ability to construct items that probe higher levels of reasoning. However, too often the reality is that instructional designers without domain expertise write multiple choice items that are irrelevant to real learning and, in many situations, cause more harm than good. Let’s dissect a multiple choice item and probe a bit deeper into why these items can be dangerous, and how to make them better.

The inherent issue with multiple choice items like this is that they test only recall, and, in this case, the recall is requested seconds or minutes after the content is provided. And, to impede the process even further, the learner can’t progress until they answer. In this item, the learner is exposed to wrong responses as well as correct ones. The standard four options are offered as radio buttons: one correct and three distractors. The learner scans the list, makes a choice by clicking a radio button, and then clicks a “Submit” button to receive a response.

In this instance, feedback is displayed immediately because the instructional designer has chosen to allow only one try for the item. On an incorrect response, a “Sorry, that’s incorrect…” statement appears next to a large red “X”. Visual cues are strong, and in this case the type of visual reinforcement and its placement are critical to how useful the item is for learning. When the learner later tries to recall the information this item supports, the visual that comes to mind may be an unhelpful one.

For this to be good instructional design, the red “X” should appear over (or beside) the incorrectly selected radio button, and the correct answer should be highlighted. Otherwise the learner never has a correct visual to overlay the incorrect one; they leave with a powerful, incorrect visual instead of a bolder, corrective one. Proper feedback is critical as well. Sometimes learners are just told that the answer is wrong, without being shown the correct answer. They then return to the item knowing only that whatever thought process or strategy they used was wrong, and can get stuck trying to remember which response they already chose while guessing at the correct one. If they have to repeat the process multiple times, they do not come away with a strong sense of the correct answer; instead they feel relief that they finally guessed right and were able to progress. In a well-designed item, the incorrect-feedback statement sits close to the selection, the correct choice is highlighted, and a confirmation appears next to it on the screen. Additional feedback that adds context may also be appropriate.

When creating multiple choice items, ask yourself these questions:

  • Is your goal just to have learners take courses, or are you trying to ensure they learn something?
  • If you need someone to demonstrate mastery of a procedure, are multiple choice items an appropriate mechanism for them to show that they can perform?

If you decide to use multiple choice items to promote learning, here are our recommendations:

  • Provide the correct answer after a wrong response.
  • Supplant the incorrect visual with a bolder visual of the correct response.
  • Use a tracking system that requires the learner to answer a certain percentage of (or all) questions correctly (see the sketch after this list).
  • Provide personalized, meaningful feedback in-place on-screen whenever possible.
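
To make these recommendations concrete, here is a minimal sketch, in Python, of how a knowledge check might behave if it follows them. The class and function names are illustrative assumptions, not the API of any particular authoring tool or LMS.

    # Sketch of a knowledge-check item that shows the correct answer after a
    # wrong response, plus a tracker that gates progression on a mastery ratio.
    from dataclasses import dataclass, field


    @dataclass
    class ChoiceItem:
        stem: str
        options: list[str]
        correct_index: int
        feedback: str = ""  # extra context explaining why the answer is correct

        def respond(self, chosen_index: int) -> str:
            """Return feedback that always restates the correct answer."""
            correct_text = self.options[self.correct_index]
            if chosen_index == self.correct_index:
                return f"Correct: {correct_text}. {self.feedback}".strip()
            # On a wrong response, supplant the error with the correct answer
            # instead of a bare "Sorry, that's incorrect."
            return f"Not quite. The correct answer is: {correct_text}. {self.feedback}".strip()


    @dataclass
    class MasteryTracker:
        required_ratio: float = 0.8       # e.g., 80% of items answered correctly
        results: list[bool] = field(default_factory=list)

        def record(self, item: ChoiceItem, chosen_index: int) -> None:
            self.results.append(chosen_index == item.correct_index)

        def mastered(self) -> bool:
            return bool(self.results) and (
                sum(self.results) / len(self.results) >= self.required_ratio
            )

The point is the behavior, not the code: the wrong visual is immediately followed by the correct answer, and progression depends on a mastery threshold rather than a single guess.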

We all think knowledge checks are innocent enough: important segues in the content sequence, little “breaks” that let the learner pause and think about what they just consumed. They can be a good thing as long as you make sure you’re putting forth the appropriate test item for both the learner and the business. At the end of the day, you don’t want to waste the learner’s time, and you really don’t want to spend precious resources designing learning experiences that don’t produce demonstrable educational gains.

This post was cowritten by Brandon Carson of the Total Learner Experience.

Don’t be Half-Assessed

Assessment is used to measure learning outcomes, but if you see it only as a testing tool you’re missing half its value. Assessment should not be punitive; it should uncover the learner’s strengths and help identify areas for improvement. Well-planned, integrated assessment creates opportunities for learners to reflect on their learning and apply new skills and knowledge, and it enables the institution to recognize the value of the learning intervention. Adapting existing models can take you much further than reinventing the wheel. In this post, we examine some guiding principles for effective assessment.

An excellent example of assessment as a way to support learning goals is Adobe’s Certified Associate practice test program. Learners can take a certification exam without any structured practice, but there is also an unlimited certification prep test package. The practice test closely mimics the structure of the final exam. Both combine multiple choice questions and simulations of the Adobe environment. The multiple choice questions require the learner to demonstrate knowledge of design concepts and the production process. The simulations require the learner to complete a task within the simulated software application interface. Keyboard shortcuts are disabled, but otherwise learners can use any correct method they choose to complete the task, using menus or panels as appropriate.

Another good example comes from the Sun (now Oracle) Java certification paths. Each path contains a test prep “kit” that includes preparation recommendations, additional resources, a practice test, and a re-take policy. Each path is designed to prepare the learner to achieve a specific level of certification, and it is used as a benchmark against industry standards. These certifications are recognized by employers and can advance a person’s career.

Accreditation or certification can be used to validate mastery of a topic, but this is not the only way that assessment can be useful. By seeing assessment as an integral part of instruction, we can support the learner’s career development while measuring true performance for the business.

Factors that make an ASSESSMENT tool also a useful INSTRUCTIONAL tool are:

  • Authenticity
    • The learner performs the task in context, not recalling theory, but actually demonstrating competency.
  • Open-ended
    • Because the end product is assessed, not the method used to get there, learners are able to use whatever menus or panels they choose.
  • Learning while doing
    • Learners use contextual clues and critical thinking to complete tasks. They may not know how to adjust alpha levels in Photoshop, but they may know to investigate the color panel to find them.
  • Self-reporting
    • Learners can mark questions that they’d like to return to if they have time or opportunity.
  • Cumulative time
    • The test is timed with one master clock, not with individual times for certain sections or items.
  • Feedback
    • Learners receive feedback on each item, with notes about the correct answer.
  • Tracking
    • Performance from one practice test to another is tracked.

Assessment should not occur only in a testing situation that indicates pass/fail rates; it should be integrated throughout instruction so the learner knows how they are doing and can learn more effectively. You don’t want your learner to leave half-assessed!

A version of this post originally appeared in The Total Learner Experience in August 2011.

Now is the Time to Do Less of More

This week we struck up a conversation about the general state of eLearning design in the corporate world. We ruminated on our belief that companies should consider doing less of more:

Brandon: So I thought this week we would discuss the overwhelming number of requests for “training” that come from business units and stakeholders. It seems like some people think “training solves all the problems.”

I’ve been on the inside and the outside of several corporate learning organizations over the years, and one trend that I’ve seen explode recently is a “factory mentality” designed to “templatize” training. I’ve seen these factories operate almost ’round the clock packaging “rapid eLearning” courses with little regard for formative or summative analysis, or interaction that motivates a learner to participate. It’s really a “page turner” world out there in many instances. Some of this is pure economics — many learning organizations are funded by business units. If a business unit allocates a specific amount of dollars for “training” they expect those dollars to actually be spent, regardless of the necessity or quality of the training.

Why have training if it’s not good? I’ve counseled the organizations I work for/with to do “less of more”. I firmly believe if most training organizations just stopped producing about 40% of what they are doing today — just stopped cold turkey — no one would even notice.

Dolly: I think one reason that training can seem so irrelevant is that it is so divorced from the day-to-day activities that the jobs actually require. So these people who are in sales, for instance, talking and corresponding with clients every day, come to training and spend 4 days listening to someone speak at them. These week-long explorations of PowerPoint slides don’t engage the learner. They just become data dump sessions.

Brandon: Yeah, I agree. I personally think sales training should be high-touch, situational, and as contextual as possible. It seems to me that there are two avenues to drive down when producing sales training: basic “transfer of information” about products, services, etc., and scenario-based/role-play simulations that place the participant in authentic situations. At Sun, we leveraged the community for the transfer of information component by providing a user-generated content platform. Sales people ate it up. They could get small chunks of product information or sales techniques from experts in the field and download it to their mobile device. We then had “Sales University” for the mandatory accredited training courses.

Dolly: And the thing is, if people are just going to be sitting in darkened rooms, why bother going to the expense of shipping them across the country? If it’s just memorization of information, there are plenty of fairly easy and cheap ways to get that across in an online setting. However, both face to face and online activities can be so much more engaging and rich.

Brandon: Agreed. One thing that seems to be missing is an evaluation of actual sales skills done in a formal manner. If we’re training these folks to sell, then we need to really look at each individual and evaluate their readiness to sell. It’s one thing to lecture them, have them role-play, provide feedback, and then send them on their way. Where do we assess their readiness to do their job? Is that for their manager to determine outside of the training? If so, is training’s role just to “provide the foundation”?

Dolly: And evaluation doesn’t have to be a test. It can be a demonstration, a portfolio, or a series of smaller activities built into the curriculum.

 

Brandon: Yes, one vendor I worked with provided role-playing scenarios in an online format using computer webcams and FlipCams. Participants would video themselves doing their pitch, and then upload it for the cohort and the facilitator to critique.

Dolly: In my eyes, that’s a perfectly valid evaluation, as long as the learning objectives were to improve their sales pitch, not to learn the capitals of African countries. The whole point is to have evaluation that supports your objectives and curriculum. Excellent evaluation can be painless and seamless.

This post can also be found at The Total Learner Experience

Creating Effective Training Evaluation

Dolly: Brandon ranted about ineffective evaluations while I was holed up under the 2 feet of snow here. The snow is still here, but my Internet is finally back. His comments about evaluation made me think about what I consider effective evaluation to look like. Cumulative evaluation should be as authentic as possible. If you want someone to be an effective salesperson, you’d better identify what competencies are required and then have them replicate and practice those as best they can in the classroom environment.

Brandon: Good point. I concur. However, more and more organizations are hit with the high cost of travel and the inefficiency of pulling people away from their jobs for instructor-led training. We need to provide effective strategies for authentic evaluation in an online format.

Dolly: So I can think of two ways to recreate these situations in the digital world: virtual simulation and/or role-playing. Learners will have different profiles. Does someone who is technically minded need the same practice as someone who is naturally a people person? Both of these individuals need to come out with the same skill set at the end, but perhaps they need different practice. Formative self-assessments combined with flexible course sequencing can allow individuals to focus on their areas of real need.
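
As a rough illustration of that idea, flexible sequencing driven by a formative self-assessment could be as simple as the sketch below, which routes a learner only to the modules where they score under a threshold. The module names and threshold are invented for the example.

    # Sketch: build an individual course sequence from a formative self-assessment.
    def plan_sequence(self_assessment: dict[str, float], threshold: float = 0.7) -> list[str]:
        """self_assessment maps a skill area to a 0..1 score or confidence level.
        Only modules below the threshold are assigned."""
        full_sequence = ["prospecting", "needs analysis", "product knowledge", "closing"]
        return [module for module in full_sequence
                if self_assessment.get(module, 0.0) < threshold]


    # A "people person" who is weak on product detail gets a shorter, targeted path:
    print(plan_sequence({"prospecting": 0.9, "needs analysis": 0.85,
                         "product knowledge": 0.4, "closing": 0.75}))
    # ['product knowledge']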

Brandon: That describes a sales training course I completed using Thiagi’s 4-Door Model. It allowed the learner to “self-adjust” the content based on their own self-leveling of knowledge. It does raise the question, though: how do you “branch” an evaluation based on different learner profiles? Can you have an effective evaluation instrument via a dynamic system that presents a contextual series of assessment items based on the learner’s individual profile?

Dolly: I think that you can have a self-assessment that gives the learner guidance on their own strengths and weaknesses.

Brandon: OK, so you’re talking about self-assessment. What about an actual skills certification? One that can affect a learner’s job status or salary, or, in compliance or regulatory situations, one where the learner’s knowledge could have life-or-death consequences? Can a dynamic evaluation instrument provide the appropriate assessment of knowledge?

Dolly: I don’t know what you mean by “dynamic evaluation instrument”.

 

Brandon: “Dynamic pooling” is when the system selects and displays content based on the learner’s input at the time of input.

Dolly: What you are talking about now is less about the assessment or evaluation used and more about the consequences tied to that assessment or evaluation. The SATs are pretty weighty, and they’ve been using a dynamic response for years. A student gets progressively harder questions until incorrect responses are entered. Then a cycle commences where easier and harder questions are given to the student until the algorithm determines the student’s level of mastery. I’d say the stakes are high, but the College Board feels comfortable using this dynamic response.
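
A rough sketch of that kind of adaptive selection is shown below. The difficulty-stepping rule and the mastery estimate are simplified assumptions for illustration, not the College Board’s actual algorithm.

    # Sketch: serve harder items after correct answers and easier items after misses,
    # then estimate mastery from the difficulty of the items answered correctly.
    def run_adaptive_test(items_by_difficulty, answer_fn, start_level=3, max_items=10):
        """items_by_difficulty: dict mapping difficulty (1..5) to a list of items.
        answer_fn(item) -> True if the learner answers the item correctly."""
        level = start_level
        history = []  # (difficulty, correct) pairs
        for _ in range(max_items):
            pool = items_by_difficulty.get(level)
            if not pool:
                break
            item = pool.pop(0)
            correct = answer_fn(item)
            history.append((level, correct))
            level = min(level + 1, 5) if correct else max(level - 1, 1)
        # Crude mastery estimate: average difficulty of the correctly answered items.
        correct_levels = [difficulty for difficulty, ok in history if ok]
        return sum(correct_levels) / len(correct_levels) if correct_levels else 0.0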

Brandon: Right. So I’m saying we need a similar system in corporate learning, where appropriate assessment techniques seem to be a missing factor. In eLearning, there is a cycle of ineffective self-check systems instead of situational, problem-based assessments. For example, a colleague was discussing with me the use of “misconception” problems in assessment. These can present more authentic situations, such as posing the problem in a scenario and requiring the learner to identify the parts of the scenario that are wrong or inaccurate. Even open-ended question types where the system evaluates the response against a keyword or series of keywords can be quite effective. How are we truly able to measure whether knowledge or skills have transferred if we’re not willing to properly evaluate?
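
Keyword-based scoring of an open-ended response can be surprisingly simple. The sketch below only checks for required key phrases; a real system would also need to handle synonyms, misspellings, and partial credit. The prompt and keywords are invented for the example.

    # Sketch: score an open-ended answer by the fraction of required keywords it contains.
    def score_open_response(response: str, required_keywords: list[str]) -> float:
        text = response.lower()
        hits = sum(1 for keyword in required_keywords if keyword.lower() in text)
        return hits / len(required_keywords) if required_keywords else 0.0


    # e.g., a misconception scenario asking the learner to name what the rep did wrong:
    answer = "The rep quoted list price and skipped the needs analysis."
    print(score_open_response(answer, ["needs analysis", "list price"]))  # 1.0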

Dolly: I like that, because some people are just naturally good test takers, or trained to be. That kind of exercise accurately tests the knowledge they have, rather than their ability to suss out a poorly written stem or item. I know you are all about performance. Shouldn’t people’s training be based on actual deficits in their performance and knowledge? And if they are high performers or can demonstrate mastery of course materials, shouldn’t they be rewarded by being allowed to choose their own training regimen, anyhow?

 

This post first appeared in January 2010 at The Total Learner Experience