Some organisations handle the design of assessment exercises internally. This can be difficult to get right without a good level of experience and expertise in-house.

Here are some of the pitfalls of exercises which aren’t professionally designed.

Written exercises

Frequent mistake: The information provided in background documents doesn't match what the assessors are instructed to look for.
Negative impact: You need to provide all the information you would expect candidates to reference and use during the exercise. It is not fair to make assumptions about what candidates do and don't already know, as this will vary. In poorly designed exercises, assessor guidelines can include points the candidate cannot reasonably be expected to have known.

Frequent mistake: Insufficient detail is provided within the exercise for candidates to answer in full.
Negative impact: Assessors measure performance against indicators which do not match the information provided.

Frequent mistake: Tasks are not sufficiently in-depth, varied or carefully constructed.
Negative impact: Candidates are not steered to demonstrate a wide enough range of behaviours, so assessors are unable to reliably rate some of the constructs (elements of the competencies) which are nevertheless included in the score sheets. This creates an unreliable assessment and the potential for invalid results.

Frequent mistake: Insufficient guidance on what good performance should look like.
Negative impact: Reduced reliability of assessment, as assessors may base their evaluation of 'good performance' on their own opinions, which will vary.

Frequent mistake: Insufficient adaptation of generic Behaviourally Anchored Rating Scales (BARS).
Negative impact: The BARS are generic guidelines only. They need to be adapted to match the specific exercise. However, the adaptation should not deviate from their original meaning, or it risks unintentional overlap with other constructs.

Frequent mistake: Constructs are indistinct.
Negative impact: Assessors end up rating the same behaviour multiple times because it isn't clear how the different constructs can be differentiated.

Frequent mistake: Constructs are included without any steer within the exercise material to prompt the candidate to address them.
Negative impact: Candidates fail to provide evidence of a particular construct because it was not apparent from the exercise that the area was relevant. Assessors end up scoring a '1' for no evidence when it was not actually the candidate's fault.

Frequent mistake: Indicators are unrealistic; a score of 4 would require more than candidates can deliver in the time available.
Negative impact: In order to meet high requirements in one competency, performance on others may suffer, creating skewed results.

Frequent mistake: Unrealistic subject matter, or content pitched at the wrong level.
Negative impact: The assessment won't be accurate if the exercise content isn't appropriate, or if it is too difficult or too easy for the level being assessed.


Role-plays

Frequent mistake: Too much for the candidate to do.
Negative impact: Assessors won't be able to make a sound evaluation if candidates have to take a scattergun approach to cover everything expected.

Frequent mistake: Too much script for the role-players.
Negative impact: There is no time for the candidate to contribute, so assessors are left with little to evaluate.

Frequent mistake: Too much to read and assimilate, or too much complexity.
Negative impact: Candidates can end up reading from their notes to cover all the essential information rather than demonstrating their natural behaviour in practice.

Frequent mistake: Constructs overlap.
Negative impact: Too much emphasis falls on one element of the role-play performance, e.g. problem solving or interpersonal aspects, at the expense of less obvious areas which still need to be assessed. Less experienced assessors will struggle to differentiate between the constructs, so the assessment becomes one of global performance rather than distinct areas. This makes the results misleading and unreliable.

Frequent mistake: Too many things covered within one indicator.
Negative impact: This is where adapting the BARS is important, because many elements of the constructs do overlap and can confuse assessors, rendering their assessment less reliable.

Frequent mistake: Too much information in the candidate instructions and assessor guidelines.
Negative impact: It becomes more difficult for assessors to separate out the behaviours they are observing and draw conclusions about what they are indicative of.

Frequent mistake: The content of the indicator does not reflect the essence of the construct.
Negative impact: This makes the assessors' job more difficult and their ratings less likely to properly reflect the performance they witnessed.

Frequent mistake: Lack of detail in the assessor and role-player guidelines.
Negative impact: Some candidates will try to model positive behaviours by 'saying the right things' in a superficial way. Role-player guidelines need to be robust enough to challenge this consistently, and assessors need sufficient detail in their scoring criteria to be able to see where holes in the performance exist.


If you would like a free review of your assessment exercises, contact us by email. We can advise on where your exercises meet the standards needed to ensure validity and reliability, and on any areas in need of amendment.