This blog post is based on the twenty-fourth episode of the data-driven recruiting podcast hosted by CodeSignal co-founders Sophia Baik and Tigran Sloyan. You can find and listen to this specific episode here or check out the video version embedded below.
Building a high-quality test is a tug of war: on the one hand, you want to assess skills in a realistic way; on the other, you have to deliver a great candidate experience. It’s a balance that organizations have to strike.
On top of that, you have to stay EEOC compliant, remove bias, and avoid varying test difficulty too much across test versions.
The three principles we discuss in detail in this episode will help you balance these competing needs when designing a job candidate assessment. We will use examples specific to designing a technical skills assessment, but you can easily apply these principles to other types of assessments.
1. Don’t Start from Scratch
Do you ever hand a candidate a blank piece of paper (or a blank coding environment) and say, “Translate this English statement into code”? While it might be helpful to see how someone gets started from nothing, chances are it’s not measuring exactly what you’re looking to measure.
For example, if you’re interviewing for a React position, you don’t need to know how high-quality their HTML and CSS are; you need to know how they implement React functionality in the code. You’re wasting their time and yours by having them start from scratch.
When designing assessments, you want to keep them between sixty and ninety minutes. This should be enough time for you to clearly understand a candidate’s skill level while keeping them engaged.
Instead of having a candidate start from scratch, give them some sort of boilerplate to start from. To follow the earlier example, provide the completed HTML and CSS to the React engineer candidate and have them focus on implementing React in the existing code. You’ll get the same signals and reduce the noise.
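As a rough illustration, a starter file for that kind of React exercise might look something like the sketch below. The component, file names, and sample data here are hypothetical, not from the episode; the point is that the markup, styling, and data are already in place, so the candidate only fills in the component logic.

```jsx
// Hypothetical starter file for a React exercise.
// The surrounding setup is provided so the candidate only writes the component logic.
import React from "react";
import "./styles.css"; // completed CSS is supplied with the assessment

// Sample data is provided so no API wiring is needed.
const PRODUCTS = [
  { id: 1, name: "Notebook", price: 4.99 },
  { id: 2, name: "Pen", price: 1.49 },
];

// TODO (candidate): render each product as a list item showing its name and price.
export function ProductList({ products }) {
  return <ul className="product-list">{/* candidate's code goes here */}</ul>;
}

export default function App() {
  return <ProductList products={PRODUCTS} />;
}
```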
2. Keep the Topics Inclusive
When non-professional test designers (employees, hiring managers, or recruiters with no formal training) design tests, they tend to choose topics or questions that resonate with them personally. This can immediately introduce bias into the exam, because not every candidate taking the assessment will be familiar with that topic. Common examples are questions framed around music, video games, or sports.
You want to exclude questions that give any advantage to candidates who have prior knowledge of or familiarity with the topic. This might be as small as knowing how a certain game works or the rules of soccer. It’s not only unfair to other candidate groups; it could also cause serious compliance issues.
3. Combine Different Question Types
If you’re trying to measure general skills, like holistic coding abilities, having only one type of question might be okay.
But if you’re trying to hire subject-matter experts, you’ll want to create assessments that measure specific skills in that subject. For example, if you’re assessing a JavaScript engineer, you don’t want just coding questions. You’ll also want some code-based multiple-choice questions that ask what a piece of code is doing, to measure their depth of understanding of JavaScript. This lets you assess not only how they write code, but how well they read and understand it.
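For instance, a code-reading multiple-choice question for a JavaScript assessment might look something like this sketch (the snippet and answer choices are illustrative examples, not from the episode):

```javascript
// Hypothetical code-reading question: "What does this program print?"
const nums = [1, 2, 3];
const doubled = nums.map((n) => n * 2);
doubled.push(nums[0]);

console.log(doubled.join("-"));

// A) "1-2-3"    B) "2-4-6"    C) "2-4-6-1"    D) "2-4-6-2"
// Correct answer: C. map returns a new array, so pushing nums[0]
// appends the original first element (1), and join produces "2-4-6-1".
```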
Another example for technical assessments is debugging: the question features code that contains an error, and the candidate has to find and fix it.
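A debugging prompt might look something like the following sketch (the function and the planted bug are hypothetical examples):

```javascript
// Hypothetical debugging question: this function is supposed to return
// the sum of an array of numbers, but it gives the wrong answer.
// Task for the candidate: find and fix the bug.
function sum(values) {
  let total = 0;
  // Bug: the loop condition stops one element early,
  // so the last value is never added. It should be i < values.length.
  for (let i = 0; i < values.length - 1; i++) {
    total += values[i];
  }
  return total;
}

console.log(sum([1, 2, 3])); // prints 3 with the bug; should print 6
```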
Providing an interesting combination of question types not only helps organizations measure skills more accurately, it also creates a more varied experience that keeps candidates engaged.