More and more tech companies have adopted objective, skills-based technical evaluations in the early stages of their hiring process. But not all evaluations are built the same way. Many companies leave technical question design to their engineers, which can lead to questions that are thought-provoking but don’t effectively measure crucial candidate skills.
Consistent, accurate evaluations require specialized expertise to build well. Here are some important steps that engineers might not know to take:
Conducting a job analysis.
Jobs in tech require a complex blend of hard and soft skills, and those specific skills can be hard to pin down for experienced engineers who find the work intuitive. A job analysis is a structured process that identifies and analyzes the specific duties and responsibilities of a given role. Engineers generally aren’t trained to conduct a job analysis, nor are they always aware that it is an important first step in writing a technical question.
Usually conducted by Industrial-Organizational (IO) Psychologists, a job analysis involves thinking through the knowledge, skills, abilities, and other characteristics (KSAOs) required to excel in a position. Often, it also includes interviews with employees in the same role or team to gather job context. The well-defined list of KSAOs created from a job analysis clarifies what skills candidates need to succeed in a job, and provides a guide for that role’s technical evaluations.
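For a sense of what this deliverable can look like, here is a hypothetical fragment of a KSAO list for a backend engineering role, with each item mapped to an evaluation component that might measure it. Every entry is illustrative rather than drawn from a real job analysis.

```python
# Hypothetical KSAO fragment for a backend engineering role, mapped to
# the evaluation component that might measure it. Illustrative only.
ksaos = [
    {"type": "knowledge", "item": "Relational data modeling",
     "measured_by": "schema-design question"},
    {"type": "skill", "item": "Debugging a failing service",
     "measured_by": "live debugging exercise"},
    {"type": "ability", "item": "Reasoning about concurrency",
     "measured_by": "code-review question with a race condition"},
    {"type": "other", "item": "Communicating trade-offs to teammates",
     "measured_by": "design discussion rubric"},
]

for k in ksaos:
    print(f"[{k['type']}] {k['item']} -> {k['measured_by']}")
```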
Validating technical questions with skills evaluation experts.
To maximize the actionable information received from a technical evaluation, questions in the evaluation must be job-relevant and role-specific. A job-relevant evaluation is directly connected to the tasks of a job. A role-specific evaluation accurately reflects what work in the role will look like.
Technical questions written by engineers may not meet both criteria. Common technical questions for software roles, like FizzBuzz, are job-relevant but not role-specific: they involve programming, but not at a level realistic for the job. Question validation, conducted by IO Psychologists and subject matter experts (SMEs), uses business data and evaluation metrics to confirm that a technical question is both job-relevant and role-specific. Partnering with a skills-based technical evaluation vendor can make this easier, since the relevant metrics are often collected automatically as part of the service.
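For reference, the classic FizzBuzz exercise fits in a few lines of Python. It checks basic programming fluency, but bears little resemblance to the day-to-day work of most software roles, which is exactly why it fails the role-specific test.

```python
# The classic FizzBuzz exercise: print the numbers 1-100, substituting
# "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz"
# for multiples of both.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```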
Piloting technical questions with other developers.
While a question might seem clear to the person who wrote it, another reader could find it confusing and hard to approach—even though they have the skills needed to succeed. This might be because of unclear or vague wording. Or, terminology could be the problem, as language used to describe software changes frequently with innovations in the tech industry.
Having other engineers test a technical question confirms whether it is consistently interpreted as intended. Piloting might show that the phrasing needs adjustment, or that more information should be added to guide candidates in the right direction. Piloting a question on a group of engineers from diverse backgrounds also helps ensure that it is broadly understandable.
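As a simple illustration of what "consistently interpreted" can mean in practice, here is a minimal sketch, with hypothetical reviewers and an illustrative threshold, that flags a question when too few pilot readers understood it as intended.

```python
# Hypothetical pilot results: did each reviewer interpret the question
# as the author intended? Names and threshold are illustrative only.
pilot_results = {
    "reviewer_1": True,
    "reviewer_2": True,
    "reviewer_3": False,  # read "service" as a class, not an endpoint
    "reviewer_4": True,
    "reviewer_5": False,  # unsure what output format was expected
}

agreement = sum(pilot_results.values()) / len(pilot_results)
if agreement < 0.8:  # illustrative threshold
    print(f"Only {agreement:.0%} of pilot reviewers interpreted the "
          "question as intended; revise the wording before release.")
```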
Conducting adverse impact analysis.
Technical evaluations promote skills-based hiring, but technical questions can still be biased and inaccurately evaluate a candidate’s skills. Bringing in SMEs and IO Psychologists to conduct adverse impact analysis can eliminate this stumbling block without asking engineers to do work outside their job description.
These experts are trained to spot when a task relies on culturally specific background knowledge, like the rules of regional sports or products only available in some countries. Adverse impact analysis can also include examining metrics related to candidate pass rates at different stages to ensure fairness across demographics. This kind of bias check is an important step in creating fair technical questions, but it is just one of many ways to reduce bias in hiring.
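One widely used pass-rate check is the four-fifths (80%) rule, which flags potential adverse impact when any group’s selection rate falls below 80% of the highest group’s rate. Here is a minimal sketch with entirely hypothetical counts.

```python
# Minimal sketch of the four-fifths (80%) rule for adverse impact.
# A group is flagged if its selection rate is below 80% of the
# highest group's selection rate. All figures below are hypothetical.
pass_counts = {"group_a": 45, "group_b": 30}     # candidates who passed
total_counts = {"group_a": 100, "group_b": 100}  # candidates evaluated

rates = {g: pass_counts[g] / total_counts[g] for g in pass_counts}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
```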
Monitoring the question after release.
Like software, a technical question may need follow-up after it’s “released” by being introduced into a technical evaluation. Question leaks are an unfortunate reality of tech hiring, and part of monitoring is routinely checking question-sharing sites so that assessment results remain a clear signal of skill rather than memorization. Job requirements also evolve over time as companies restructure, which means technical questions relevant to one role won’t always stay relevant to that role.
Engineers already focused on delivering a product don’t have the capacity for this kind of work. Using a skills evaluation framework that generates thousands of variations of each question can cut down the cost of any single leak. A framework also lets you track metrics on how well questions assess particular skills, so you can pivot easily when the time comes.
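To make the idea of question variations concrete, here is a minimal sketch of expanding one question template into many surface-level variants. The template, entity names, and numeric ranges are all hypothetical; with a few more parameterized fields, the count easily reaches thousands.

```python
# Minimal sketch: expanding one question template into many variants.
# The template fields, entity names, and numeric ranges are hypothetical.
import itertools

TEMPLATE = (
    "A {entity} service receives {n} requests per second. "
    "Design a rate limiter that allows at most {limit} requests "
    "per client per minute."
)

entities = ["payments", "inventory", "notifications", "search"]
request_rates = [500, 1_000, 5_000, 10_000]
limits = [60, 120, 300]

variants = [
    TEMPLATE.format(entity=e, n=n, limit=l)
    for e, n, l in itertools.product(entities, request_rates, limits)
]

print(len(variants))  # 48 surface-level variants from one template
print(variants[0])
```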
Taking the time to rigorously work through each of these steps is crucial to ensuring that technical questions fairly and accurately judge candidate expertise. Many of them would demand expensive additional engineering time or rely on skills that engineers aren’t trained in. CodeSignal Pre-Screen and Tech Screen are powered by Skills Evaluation Frameworks built and maintained by IO Psychologists and assessment design experts, and they are guaranteed to provide structured and fair skills evaluation. If this sounds valuable to your team, you can schedule a discovery call here.