How have engineering teams at companies like Uber and Robinhood grown so much faster than the competition? One of their secrets: they don't write all their own interview questions to handle thousands and thousands of phone screens. Instead, they use skills evaluation frameworks.
If you’re hiring engineers in today’s competitive market, it’s worth spending a moment to understand what skills evaluation frameworks are. They have several important benefits, including reducing the work for the engineering team, lowering the chance of interview plagiarism, and ensuring candidates see questions that are relevant and fair — rather than sourced from LeetCode at the last minute!
What is a skills evaluation framework?
You can think of a skills evaluation framework as the blueprints that describe the content you will need — in other words, the particular types of interview questions and coding tasks — to measure a candidate’s readiness for a given role. Your organization might use one framework for junior engineers and a different framework for a specialized role like data science. When you assess a candidate, the blueprints for the interview will be filled in with questions and coding tasks, which are purposefully varied for each candidate to avoid plagiarism.
While you can create frameworks on your own, it’s more common to use frameworks from an assessment vendor that employs industrial-organizational (I-O) psychologists, assessment researchers, and assessment design engineers. This lifts the burden of creating and maintaining questions from your team and ensures that the framework is developed to be fair and unbiased.
Skills evaluation frameworks are primarily about the content of what skills and knowledge you assess, and not the method of how you assess it. The method itself can vary from an asynchronous take-home assessment to an in-person interview, and different assessment vendors may provide different options within their platform.
Why use a skills evaluation framework?
It’s natural to wonder what the benefit of using blueprints might be — especially when you’re used to constructing technical interviews ad hoc. Here are the advantages we’ve observed in working with engineering leaders at some of the largest tech companies in the world.
1. Combats plagiarism automatically
Engineers love to help other engineers, but unfortunately for hiring teams, this means that interview questions are constantly being leaked online. You might not know if your latest candidate performed well because they did a Google search for your company’s interview questions the night before. The best defense against plagiarism is to create more interview questions, faster — but most engineers don’t have time for that on top of their regular jobs.
Skills evaluation frameworks are designed to ensure no two interviews look alike. Each question or task has countless variations, while remaining fair and balanced for candidates. Ultimately, no interview questions are leak-proof, but when you have a third party maintaining your skills evaluation framework, your team doesn’t have to monitor online forums and update questions when necessary — the vendor should take care of that for you.
2. Monitored and maintained on an ongoing basis
In addition to being monitored for plagiarism, skills evaluation frameworks can be monitored to ensure that questions are consistent and continually updated to reflect the latest skills and knowledge needed in the market. Some vendors can even conduct a job performance analysis for recent hires and link it back to what the framework assessed for originally.
Interviews need to be constantly updated and maintained to provide an effective signal, and frameworks make it possible to do this work methodically and in collaboration with subject matter experts. The alternative is often that a team haphazardly changes their interview questions and rewrites their take-home project assignment every few years.
3. Lets you become more data-driven
When you use the same framework over and over again, especially over a large candidate pool, you end up with a substantial amount of data that is useful to recruiters and hiring managers alike. You can establish a scoring distribution and confidently set certain thresholds, like the scoring cutoff for moving candidates forward from a tech screen to an on-site interview. You’ll even know what your expected pass-through rate will be if you move the threshold up or down.
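The threshold analysis described above is simple once scores accumulate. As a minimal sketch (with invented scores), you can compute the pass-through rate you'd expect at any scoring cutoff:

```python
import statistics

# Hypothetical tech-screen scores (0-100) from past candidates who all
# took assessments built on the same framework.
scores = [48, 55, 61, 63, 67, 70, 72, 74, 77, 79, 81, 84, 86, 90, 93]

def pass_through_rate(scores, threshold):
    """Fraction of candidates who would advance at a given score cutoff."""
    return sum(s >= threshold for s in scores) / len(scores)

print(f"median score: {statistics.median(scores)}")
for cutoff in (65, 70, 75, 80):
    print(f"cutoff {cutoff}: {pass_through_rate(scores, cutoff):.0%} advance")
```

Sliding the cutoff up or down immediately shows how many candidates would move from the tech screen to the on-site, which is exactly the kind of planning a recruiting team needs.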
If you work with an assessment vendor, they should be able to share additional data about the framework, such as benchmark scores for particular roles and levels across your particular industry. When you’re building interviews from scratch, you can’t do these kinds of analyses. Seeing how your candidates compare to your competitors’ can help inform your recruiting and sourcing strategy.
How skills evaluation frameworks are built
So, what goes into a skills evaluation framework that makes it so useful? Let’s shed some light on the process of developing one.
1. Identifying the core skills for a role
To build a framework for a particular engineering role and level, an assessment research team (which should include PhD industrial-organizational psychologists) begins by collaborating closely with subject matter experts: experienced engineers who are intimately familiar with the role’s requirements. That collaboration centers on two essential questions:
- For this specific role and level, what are all the skills and knowledge a candidate needs? For example, for CodeSignal’s Industry Coding Framework, our research revealed that skills like refactoring and being able to review existing code are necessary for success as a mid- to senior-level engineer.
- What’s absolutely required vs. not really required across the board? A lot of things are “nice to have,” but not necessarily core to the role and level. For instance, it might be nice if a candidate is comfortable programming in React, but is it really a requirement to succeed in an entry-level frontend developer role?
2. Constructing the framework
From this conversation, assessment research teams will come up with the general structure and building blocks of a framework. Here, we’ve found it’s essential to put yourself in the candidate’s shoes. From their point of view, we want the content of the assessment to progress naturally. We also want to balance the need for a positive candidate experience with the team’s need for a strong signal. Even if a candidate doesn’t perform well on the interview, we still want them to walk away feeling that the questions were fair and reflected what they’d be required to do on the job.
3. Validating the framework
Next, the proposed framework needs to be validated with the subject matter experts. They’ll be asked to weigh in on whether the framework captures the right skills and is fair and accurate. It can take some time and research for everyone to reach agreement before the framework can be rolled out to candidates. Throughout the rollout, it’s essential to monitor candidate feedback, consistency, and fairness and relevance signals.
4. Introducing variations
Variations are the best way to combat plagiarism and leaking, which is critically important for a team hiring at scale. Even before validation, the assessment design team may start to introduce variations to the content that are minor in nature — and therefore unlikely to introduce scoring differences between candidates with the same skill set. Over time and with careful testing for fairness, variations can become bigger to ensure the framework remains robust.
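To illustrate what a minor variation might look like, here is a toy sketch of a parameterized question template. The underlying task stays identical, so difficulty is held constant, but each candidate sees different surface details. All names and values are invented for illustration, not CodeSignal's actual system.

```python
import random

# Each tuple holds an equivalent surface "skin" for the same underlying
# task: filter records by a numeric threshold.
DOMAINS = [
    ("orders", "total", "Return the orders whose total exceeds {limit}."),
    ("sensors", "reading", "Return the sensors whose reading exceeds {limit}."),
    ("players", "score", "Return the players whose score exceeds {limit}."),
]

def generate_variation(seed):
    """Deterministically derive one surface variation from a candidate seed."""
    rng = random.Random(seed)
    entity, field, template = rng.choice(DOMAINS)
    limit = rng.randrange(50, 500, 10)
    return {"entity": entity, "field": field,
            "prompt": template.format(limit=limit)}

# Different candidates get different but equivalent prompts;
# the same seed always reproduces the same prompt for review.
print(generate_variation("candidate-123"))
print(generate_variation("candidate-456"))
```

Seeding per candidate makes each variation reproducible for grading and auditing, while making it much less useful to share a leaked prompt online.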
Coming up with interview questions and coding tasks is a lot of work, especially when you’re trying to hire at scale. Skills evaluation frameworks not only reduce this burden and help you grow faster — they can also bring more consistency and relevance to your interviews. If you’re interested in learning more about leveraging CodeSignal’s skills evaluation frameworks and working with our assessment design team, sign up for a free demo here.