What is Reliability?

Reliability is a statistical measure of whether a tool (here, the Leadership Quality constructs within the assessment exercises) measures an outcome (scores) consistently, i.e. whether the same results would be replicated on every subsequent use.

Internal reliability assesses the internal consistency of a scale. It is widely used in psychometrics and research, and allows the consistency of behavioural indicators to be measured.

In simple terms, it shows that the constructs within the assessment exercises measure performance consistently.
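
For readers interested in how internal consistency is typically quantified, one common statistic is Cronbach's alpha. The sketch below is a minimal, illustrative example only; the candidate scores are invented and do not reflect real CPG data, and this is not necessarily the exact statistic used in our analysis:

```python
# Minimal sketch: Cronbach's alpha as a measure of internal consistency.
# Scores are invented for illustration (5 candidates x 4 behavioural indicators).

def cronbach_alpha(scores):
    """scores: one row per candidate, one column per indicator (item)."""
    k = len(scores[0])   # number of items
    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

example = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(example), 2))  # ~0.94 for this invented data
```

Values closer to 1 indicate that the indicators are measuring the same underlying construct consistently; a conventional rule of thumb treats values above roughly 0.7 as acceptable.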

 

What is Validity?

Validity is a statistical measure of a tool's effectiveness: does it measure what it is supposed to measure? In other words, will performance in the CPG predict candidates' future in-role performance, and is the CPG therefore a suitable tool on which to base promotion decisions?

 

Why is it Important to Measure Reliability and Validity?

Measuring reliability is paramount to demonstrate that the assessment exercises are consistently doing what they are supposed to do and will continue to produce consistent results. Reliability must be measured before validity: if a measure isn't reliable it cannot be valid, although the converse is not true (a reliable measure is not necessarily valid).

Measuring validity is important to demonstrate that the constructs within the assessments are an effective method of measuring specific behaviours and will correlate with future in-role performance, thereby predicting suitability for promotion.
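
A correlation of this kind is commonly quantified with the Pearson correlation coefficient. The sketch below is illustrative only; both data sets are invented and do not represent real CPG scores or appraisal outcomes:

```python
# Minimal sketch of a predictive-validity check: the Pearson correlation
# between assessment scores and later in-role performance ratings.
# Both lists below are invented purely for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

assessment = [62, 70, 55, 81, 67, 74]          # hypothetical CPG scores (%)
performance = [3.1, 3.8, 2.9, 4.4, 3.5, 4.0]   # hypothetical appraisal ratings
print(round(pearson_r(assessment, performance), 2))  # ~0.99 for this invented data
```

A coefficient near +1 would indicate that higher assessment scores are strongly associated with stronger in-role performance; real-world validity coefficients are typically far more modest.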

Ensuring our CPG is statistically reliable and valid shows that it uses a rigorous process that significantly reduces the margin of human error which could otherwise lead to mis-marking and the need for an appeal. This gives our clients confidence in the results, and in the interpretation of those results, with regard to candidate performance, development and suitability for promotion.

 

What do we do to Ensure the Reliability of our CPG?

  • Roleplay exercises – in line with industry standards, reliability is verified through benchmarking between assessors (inter-rater reliability)
  • Case study exercises – 10% of all Case Studies are randomly selected and double-blind marked to ensure that scores are awarded consistently
  • All exercises – any exercise with a score borderline to the 63% pass mark is reviewed by the Centre Manager to ensure that all evidence has been taken into consideration
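
Agreement between two independent markers can be quantified in several ways; one common statistic is Cohen's kappa, which corrects raw agreement for chance. The sketch below is illustrative only; the marks are invented and this is not necessarily the statistic used in our double-marking review:

```python
# Minimal sketch: Cohen's kappa as a chance-corrected measure of agreement
# between two independent markers on the same double-marked scripts.
# The marks below are invented for illustration.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))  # chance agreement
    return (po - pe) / (1 - pe)

marker_1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
marker_2 = ["pass", "pass", "fail", "fail", "fail", "pass"]
print(round(cohen_kappa(marker_1, marker_2), 2))  # ~0.67 for this invented data
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance; values in between indicate partial agreement beyond chance.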

 

What do we do to Ensure the Continuing Reliability and Validity of our CPG?

With a dedication to excellence and best practice, VCA follows the recommendations received in correspondence with the British Psychological Society (BPS) on conducting validity and reliability testing for Assessment and Development Centres.

Anonymous score data is collated and passed to an independent, external Psychologist to conduct reliability analysis on the cohorts of data. VCA undergoes this rigorous process annually, reviewing the analysis as new data is continually added. Any feedback from this process is then incorporated into exercise design to ensure a firm dedication to industry best practice and continual service improvement. Click here to read the CPG Reliability Analysis Report.

In alignment with BPS recommendations on validity testing, data is currently being collated, and validity analysis will be conducted once the minimum recommended sample size (100 candidates) has been reached. Thereafter, the validity analysis will be renewed annually, incorporating all new available data to inform future design. We will make our results publicly available on our website.

 

Working Together with our Clients to Promote Best Practice

To help us achieve our objective and dedication to best practice, we ask for our clients' assistance in passing our Acceptability Study to candidates post-process, to gather feedback on candidate experience and perceptions in addition to assessing acceptability. We also ask our clients to complete an Evaluation Questionnaire to gather quantitative data about client experience. To feed into the validity process, we also follow up with clients on an annual basis post-assessment, gathering data to inform correlations between assessment performance and in-role performance.

Without the invaluable, ongoing help of all of our clients, we wouldn’t be able to pursue this pathway to excellence – so thank you for all of your time and effort in helping us with this process!