
To reduce bias in assessment, use job-relevant examples in your coding tasks

Here at CodeSignal, we’ve seen first-hand the value of skill assessments in reducing bias in technical hiring. Because skill assessments allow candidates to show what they can do, hiring managers who use tools like CodeSignal no longer need to rely on proxies, like where a candidate went to school, to identify top talent. This has allowed our customers (and, of course, our own hiring teams) to consider a more diverse pool of applicants—rather than limiting their search to candidates from the most prestigious schools or the biggest-name past employers.

However, just using assessments isn’t enough to minimize bias in hiring. You also have to make sure the assessment itself is unbiased. This article, the first in a new series on designing effective technical assessments, dives deeper into how to create coding tasks whose examples are job-relevant and unbiased.

What makes for an effective coding task?

Let’s start by defining some key terms.

Coding tasks, of course, are the building blocks of your assessment. Technical assessments often consist of 2-3 coding tasks and, in some cases, a series of quiz questions.

Job-relevant means that the content of the assessment is clearly connected to the responsibilities and requirements of the job; job-relevance is also a foundation of valid hiring assessments. Because those responsibilities differ, the exact definition of “job-relevant” will vary by industry and by company. Candidates for a mobile developer position, for example, may be asked to build a mobile app using React during a technical interview; similarly, candidates for a senior engineering manager position in the financial industry may be expected to know something about how the stock market works.

Role-specific tasks use examples that the candidate might encounter in the role they’re applying for. Role-specific tasks, most commonly used for assessing senior-level candidates, go beyond the standard of job relevance to provide candidates a simulation-like experience of the job they’re applying for. They offer hiring teams a valuable glimpse of how a candidate might perform on the job.

Take, for instance, the ubiquitous FizzBuzz coding challenge (a common solution in Python is shown below).

for fizzbuzz in range(1, 101):
    # Check multiples of both 3 and 5 (i.e., of 15) first, so they aren't
    # caught by the single-factor branches below.
    if fizzbuzz % 15 == 0:
        print("FizzBuzz")
    elif fizzbuzz % 3 == 0:
        print("Fizz")
    elif fizzbuzz % 5 == 0:
        print("Buzz")
    else:
        print(fizzbuzz)

This task asks candidates to write code that, for each number in a sequence, outputs “Fizz,” “Buzz,” or “FizzBuzz,” depending on whether the number is a multiple of 3, 5, or both, and outputs the number itself otherwise. Is FizzBuzz job-relevant? Most likely, yes; it tests a developer’s ability to problem-solve and produce functional code. However, the FizzBuzz challenge is not role-specific because it doesn’t realistically simulate the work the candidate would be doing on the job.

Lastly, unbiased means that the task does not unfairly advantage one group of people over another, assuming both groups have the job-relevant skills needed to solve the task. A biased task, on the other hand, may give one group a leg up by relying on culturally specific background knowledge, like the rules of baseball.

Avoiding biased examples

Bias, in the context of hiring practices, occurs when an assessment discriminates based on characteristics that are not relevant to the job requirements—for example, when its content fails to capture important aspects of the job or has nothing to do with the job at all. This becomes a problem when job-irrelevant content drags down the scores of some groups more than others.

So what does this have to do with coding tasks, and how can these be biased? Coding tasks, even when they test job-relevant skills, often use examples that have nothing to do with the job. Take FizzBuzz—it allows candidates to demonstrate their ability to produce working code. But the example it uses is arbitrary. What company needs a software engineer who can write code that produces “fizzes” and “buzzes?”

Another popular task, often called “buy and sell stock to maximize profit,” uses the example of the stock market. A Stack Overflow user describes the problem like this:

You are given the stock prices for a set of days. Each day, you can either buy one unit of stock, sell any number of stock units you have already bought, or do nothing. What is the maximum profit you can obtain by planning your trading strategy optimally?

Many hiring teams have found this problem useful for assessing developers’ coding skills. But unless you’re hiring for a role in the financial industry, the example in this task is probably unrelated to the role. And, depending on the task description you provide to the candidate, the example may even be biased. Imagine, for example, being asked to complete this task without a basic familiarity with the stock market. Could you do it?
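For context, here is a minimal Python sketch of one common approach to the problem as quoted above, assuming the “buy at most one unit per day, sell any number of units later” rules; the function name and sample prices are ours for illustration, not part of the original task.

def max_profit(prices):
    # Scan from the last day backward, tracking the highest price still to come;
    # a unit bought on an earlier, cheaper day is best sold at that future peak.
    # (Illustrative sketch only; names and test values are assumptions.)
    best_future_price = 0
    profit = 0
    for price in reversed(prices):
        best_future_price = max(best_future_price, price)
        profit += best_future_price - price  # 0 if today is the peak, i.e., do nothing
    return profit

print(max_profit([1, 3, 1, 2]))  # 3: buy at prices 1 and 1, sell at 3 and 2

Notice that nothing in the solution itself requires financial knowledge; only the task’s framing does, and that framing is exactly the kind of job-irrelevant background knowledge the questions below are meant to flag.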

To avoid using examples that introduce bias into your assessment, ask yourself these questions:

  • What job-relevant knowledge, skills, or abilities does this task assess?
  • What background knowledge does the candidate need to understand this task? 
  • Is the required background knowledge relevant to the job, company, or industry?
  • Lastly, how do different subgroups (along lines of race, gender, national origin, etc.) interpret this task? To address this question, it can be useful to solicit feedback on your coding tasks from a diverse pool of colleagues or engage assessment design experts in developing your technical assessments. 

Conclusion

Designing a valid and fair technical assessment is no small task. Bias can be hard to spot, and we’re often not even aware of the biases we hold. But by developing job-relevant and even role-specific tasks, hiring teams can start to minimize bias in their assessments—crucial to identifying candidates who have the skills needed to succeed on the job. 

Even better is engaging a team of assessment design professionals to help you design and validate assessments. The test design experts at CodeSignal would love to talk with you about designing effective assessments that reduce bias and empower your team to #GoBeyondResumes in technical hiring.