We learn more from our mistakes – yeah right!
Back in my school days, I remember my teacher telling us, “We learn more from making mistakes than not.” Yet, in reality, this very statement is the polar opposite of the way we deploy most learning activities.
Think back to a typical learning experience. How was competency assessed? And did you have the opportunity to learn from your incorrect answers? Online learning is an especially isolated experience, and the assessment or test at the end frequently ignores the fact that "we learn more from our mistakes".
When a learner receives a score of 80%, what does that really mean? Yes, they managed to get 8 questions out of 10 correct. But what about the 2 questions they got wrong? This is where most assessment systems break down. Too little effort goes into following up on the gaps in knowledge once the final score crosses an arbitrary threshold.
Learning and development professionals understand that a relaxed and open environment creates ideal learning conditions. So why do we deliberately turn assessments into do-or-die events? I understand the importance of running assessments in a formal and quiet manner. But this is an artificial environment, so can we really expect our learners to perform to their potential?
Approach and Challenge
I believe there is merit in assessing competency through a series of questions online if it makes up one part of the overall assessment. The other part should include observations of real-world performance so we can be confident that the knowledge is applied correctly outside of the learning environment.
In my opinion, any assessment – especially in an isolated online environment – should promote and allow for curiosity. Take this multiple-choice question as an example:
What colour is the sky during a cloudless day?
If the learner chooses the answer blue, they receive a point and normally move on without thinking about it further. Why not let them try out the other responses and understand why those are incorrect? I believe that would promote a deeper understanding of the fact. Naturally, this only works well when there is constructive feedback in place for each response.
One of our recent projects involved working with Auckland Live (formerly The Edge) on an online module about accessibility needs. The instructional design of the learning and assessment content is built on everything we understand to be best practice as learning and development professionals – it does help that the subject matter expert is also an advocate! We designed the assessment at the end of the learning module to include individualised feedback for every multiple-choice response – both right and wrong choices. We also encouraged learners to deliberately try out other responses, so the feedback helps reinforce why the correct answer is indeed right.
At your next opportunity, I challenge you to design learning and assessment activities that encourage curiosity – because we do indeed learn more from our mistakes!