
An IO psychologist’s guide to maintaining an effective technical assessment

Creating a technical assessment that produces a reliable signal of skill is no easy feat. It’s also challenging to ensure that your assessment continues to provide a strong signal over time—even after thousands of candidates have taken it. In this article, Industrial-Organizational Psychologist Frank Mu, PhD, of CodeSignal’s Talent Science team, offers concrete, research-backed strategies for developing assessments that will be effective long-term.

After your talent and engineering teams spend hours designing and validating a technical skills assessment, it’s a shame to find those same coding interview questions leaked online. Online coding assessments and technical screening interviews can be highly effective tools for hiring software engineers and other technical talent at scale. That’s only the case, however, if a company can successfully maintain its assessments over time.

Without active management, technical skills assessments are susceptible to leaks and plagiarism. This is on top of other risks, such as an assessment being too easy, too hard, or generally frustrating for candidates. Ultimately, a poorly maintained assessment might no longer be able to provide a useful signal to an employer trying to make a hiring decision. In this post, we’ll discuss three strategies for mitigating these challenges and sustaining the integrity of technical skills assessments.

1. Create question variations

An approach that may be familiar from standardized testing, creating question variations is a powerful way to minimize the impact of content leaks. With variations, two candidates are shown different content, but they still need to demonstrate the same underlying knowledge or skills to tackle problems of comparable difficulty. For example, a math question could be rewritten to use different numbers or a different order of operations. Similarly, a data structure question about manipulating string arrays could be rewritten to be about manipulating numeric arrays instead.
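To make this concrete, here is a minimal, hypothetical sketch in Python (not taken from any real assessment) of two variations that test the same underlying skill, frequency counting with a hash map, on different surface content:

```python
# Hypothetical illustration: two question variations that exercise the same
# underlying skill (frequency counting with a hash map) on different content.
from collections import Counter

def most_repeated_word(words: list[str]) -> str:
    """Variation A: return the word that appears most often in a string array."""
    return Counter(words).most_common(1)[0][0]

def most_repeated_number(numbers: list[int]) -> int:
    """Variation B: return the number that appears most often in a numeric array."""
    return Counter(numbers).most_common(1)[0][0]

# Both variations demand the same approach and comparable effort, so scores
# stay comparable even though the surface content differs.
print(most_repeated_word(["red", "blue", "red"]))  # red
print(most_repeated_number([3, 7, 3, 9, 3]))       # 3
```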

Ensuring consistency across variations is a resource-intensive process, requiring considerable time and expertise. Companies often create assessment specifications or turn to validated frameworks to design questions of an appropriate difficulty that directly map to designated skills for a role. At a minimum, specifications should include the assessment length, required skills/knowledge, question difficulty, and scoring guidelines. This way, you can promote fairness in the hiring process while still producing enough question variations to reduce plagiarism driven by content leaks.
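As a rough illustration of what such a specification might capture, here is a hypothetical spec encoded as structured data. The field names and values are assumptions for illustration only, not a CodeSignal or industry-standard format:

```python
# Hypothetical assessment specification; field names and values are
# illustrative assumptions, not a standard or vendor format.
assessment_spec = {
    "role": "Backend Software Engineer",
    "length_minutes": 70,
    "skills": ["arrays_and_strings", "hash_maps", "api_design"],
    "questions": [
        {"skill": "arrays_and_strings", "difficulty": "easy",   "points": 300},
        {"skill": "hash_maps",          "difficulty": "medium", "points": 300},
        {"skill": "api_design",         "difficulty": "hard",   "points": 400},
    ],
    "scoring": {"max_score": 1000, "pass_threshold": 600},
}
# Every new question variation is authored against this spec, so each version
# of the assessment targets the same skills at the same difficulty.
```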

2. Set a content lifespan and refresh your content

To further reduce the likelihood of leaks, companies ought to regularly refresh their questions with appropriate variations such that no content exceeds a predetermined expiration threshold. It’s critical that questions have a finite lifespan since leaks are more probable as more candidates see the same questions. Because the hiring cycle is seasonal, content lifespans should be measured by the number of candidates exposed rather than in units of time. For instance, if a large wave of candidates all encounter the same question during university recruiting, it may already be time to retire the question.
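A minimal sketch of exposure-based retirement might look like the following. The in-memory store and the exposure limit are illustrative assumptions, not recommendations:

```python
# Minimal sketch: retire questions by candidate exposure, not calendar time.
# EXPOSURE_LIMIT and the in-memory store are illustrative assumptions.
EXPOSURE_LIMIT = 500  # tune to your own candidate volume and risk tolerance

exposure_counts: dict[str, int] = {}  # question_id -> candidates who have seen it
retired: set[str] = set()

def record_exposure(question_id: str) -> None:
    """Call whenever a candidate is shown a question; retire it past the limit."""
    exposure_counts[question_id] = exposure_counts.get(question_id, 0) + 1
    if exposure_counts[question_id] >= EXPOSURE_LIMIT:
        retired.add(question_id)  # swap in a fresh variation before the next use

# A university recruiting wave can push a single question past the limit in
# days, which is exactly why time-based expiration alone falls short.
```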

Expired content needs to be replaced by appropriate variations to ensure that the assessment still functions as intended and doesn’t deviate in difficulty. Therefore, you should also factor in how long it takes to create new question variations when setting the cadence of content refreshes. Since good assessment development is time-consuming and demands expertise, some companies employ full-time content developers to scale and maintain their in-house assessments. Others partner with vendors who specialize in keeping technical skills assessments fresh and up to date.

3. Develop a monitoring plan

One of the most important processes for maintaining assessments is also one of the simplest: Monitor the state of your assessments so that you can address concerns. Using question variations and periodically refreshing content is great, but those tactics will do little to solve structural issues with your content, such as assessments being too hard or candidates perceiving assessment questions as irrelevant to the target role.

To devise a monitoring plan, start by answering the following questions:

  • What is the expected distribution of candidate scores, and what percentage of candidates are expected to pass?
  • What percentage of candidates do we expect to drop out of the hiring process?
  • How will we detect and evaluate the impact of leaks and plagiarism?
  • How often should content be revised or updated?
  • How can we evaluate and improve the candidate experience?

The answers to these questions will help set the goals you want your technical assessments to reach. As part of your monitoring plan, periodically review these questions against actual metrics to check whether your assessments are meeting the bar. You can configure dashboards for key metrics such as candidate dropout rate, pass rate, or content leaks. Then, if a metric drifts in the wrong direction, you can alert your team to proactively revise the assessment. Be mindful of the qualitative aspects of assessment evaluation as well: improving technical assessments isn’t always a matter of numbers, and it’s a best practice to holistically review the full candidate experience.
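As one way to operationalize this kind of review, here is a minimal monitoring sketch, assuming you can export per-candidate records with a score, a pass flag, and a completion flag. The thresholds and field names are illustrative assumptions, not recommended targets:

```python
# Minimal monitoring sketch; thresholds and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CandidateResult:
    score: float
    passed: bool
    completed: bool  # False if the candidate dropped out of the assessment

def review_metrics(results: list[CandidateResult],
                   expected_pass_rate: float = 0.40,
                   max_dropout_rate: float = 0.20,
                   tolerance: float = 0.10) -> list[str]:
    """Compare observed pass and dropout rates to targets; return alert messages."""
    alerts: list[str] = []
    if not results:
        return alerts
    total = len(results)
    pass_rate = sum(r.passed for r in results) / total
    dropout_rate = sum(not r.completed for r in results) / total
    if abs(pass_rate - expected_pass_rate) > tolerance:
        alerts.append(f"Pass rate {pass_rate:.0%} drifted from target {expected_pass_rate:.0%}")
    if dropout_rate > max_dropout_rate:
        alerts.append(f"Dropout rate {dropout_rate:.0%} exceeds limit {max_dropout_rate:.0%}")
    return alerts
```

Alerts like these are a starting point; pair them with qualitative review of candidate feedback, since not every problem shows up in the numbers.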

Companies leverage CodeSignal for its actively maintained technical skills assessments

There’s a ton of work that goes into designing and then maintaining high-quality technical skills assessments. Consequently, many companies don’t have the resources or expertise to do this in-house and instead choose to partner with vendors like CodeSignal for our robust content library. CodeSignal’s assessments are backed by validated frameworks, so you can be sure you’re testing relevant skills for software engineering, data science, and other technical roles. Moreover, with thousands of variations and new, expert-written questions regularly being added, you can always be confident in the quality of signal provided by CodeSignal’s assessments. To learn more, request a demo today.

About the author

Frank Mu, PhD, is an Assessment Research Manager at CodeSignal, where he conducts research and builds data analytics processes to ensure scientific rigor behind the development, usage, and maintenance of CodeSignal’s Skills Evaluation Frameworks. He received his PhD in Industrial-Organizational Psychology from the University of Waterloo and is an active member of the Society for Industrial-Organizational Psychology (SIOP), serving as a volunteer on the Membership Analytics Subcommittee.