This blog post is based on the eleventh episode of the data-driven recruiting podcast hosted by CodeSignal co-founders Sophia Baik and Tigran Sloyan. You can find and listen to this specific episode here or check out the video version embedded below.

Using assessments, especially at the top of your hiring funnel, is a key component of making the selection process fair. But with the introduction of assessments comes the need to validate the results you're getting while avoiding any adverse impact. So, how do you avoid adverse impact, or prove that your assessment doesn't have one built in? It's an important question to ask, because companies that are subject to EEOC guidelines and other legal requirements have to be able to defend their results. The Uniform Guidelines lay out what's expected, so ensuring you don't have an adverse impact has to be a priority.

Here's one of the big components of the process: reliability. Simply put, reliability is the concept of retesting and getting the same result. This might be with the same test, or with an updated version that has new questions. Let's say your organization sent the assessment to thirty candidates, of whom you hired five. If you retested the same thirty people six months later and found dramatically different results, then the test is unreliable.

But it's more than just assumed reliability; you have to be able to prove it. Without the ability to show that a retest would produce a highly consistent result, you cannot defend your test or its output. When it comes to regulatory guidelines, being unable to prove the reliability of your assessments can have consequences. You are legally responsible in those cases for ensuring there is no adverse impact on a protected class, such as a particular race, sex, or ethnic group. And this goes beyond your assessments: it's an important factor that applies to every part of your hiring process.
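One common way to quantify test-retest reliability is to correlate candidates' scores on the original test with their scores on the retest. Here's a minimal sketch of that idea in Python, using made-up scores for illustration (the numbers, the scale, and the 0.8 rule of thumb are assumptions for the example, not figures from the episode):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for the same candidates on the original
# assessment and on a retest six months later (0-100 scale).
original = [88, 72, 95, 60, 81, 77, 90, 65]
retest = [85, 70, 97, 58, 84, 75, 88, 69]

r = pearson_r(original, retest)
print(f"test-retest reliability: r = {r:.2f}")
# A high correlation (often 0.8 or above is cited as a rule of thumb)
# suggests the test is reliable; a low one means a retest would have
# ranked the same candidates dramatically differently.
```

The point isn't the exact threshold; it's that "a retest would produce a highly consistent result" is a measurable, documentable claim, not a feeling.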
Validity is another crucial component of the validation process, and it can be more complex to measure. The most common method in this category is the content-related validation process. In this scenario, you start with a job analysis and determine the key requirements of the role: what skills and knowledge are required of a candidate? Next, you have to be able to show that your questions, asked of any applicant, are measuring those specific skills or that knowledge base. Determining whether assessment questions measure only the key required skills can be tough.

There's another method as well, called criterion-related validation. It requires more time to analyze because it measures whether the people who scored well and were hired actually performed well on the job.

Usually, for reliable validation, you'll need some expert advice. A professional can validate exactly what a particular question is measuring and verify that it is a required skill for the job. These outside experts can also defend your assessments and results in court, if that situation ever occurs.

When it comes to assessments, remember that validation is not as easy as it sounds, and there are strict requirements to follow. Always start with a framework when developing your assessments and use experts when needed. When it's working correctly, you'll be hiring qualified candidates, and you'll be able to defend how you did it!