Just weeks ago, remote work was a luxury or a benefit for many; overnight, it has turned into a business imperative. But even as cities shut down schools and companies close offices, the demand for software engineering talent isn’t slowing down.
Karat conducts technical interviews for companies hiring software engineers. We’ve conducted over 60,000 remote technical interviews, and as head of Karat’s solutions engineering team, it’s my job to help clients develop structured interview processes that align with their hiring bars, assess core competencies, and get the most predictive hiring signal. I’ve spent the last few weeks meeting directly with engineering leaders and hiring managers who are understandably anxious about how the lack of onsite interview loops will impact technical hiring.
Every company that is moving to remote hiring should consider three crucial concepts: competencies, interviewers and measurement.
The biggest challenges with code tests and remote interviews are that candidates don’t know what’s being assessed or how they’re being measured.
Assign a specific owner to review each job’s description and responsibilities, then align these elements to competencies. It may be helpful to think about how the person is going to be evaluated in their review at the end of the year, and what skills they will need to be successful (this will also help when it comes time for onboarding).
Once you know the competencies that are being assessed, it’s critical that your remote interview questions evaluate one competency at a time — otherwise, you’ll introduce noise and false negatives into the process.
Also, avoid ambiguity. Assess the competencies, not a candidate’s mind-reading abilities. Be explicit about what you’re asking the candidate to do. If you have a coding question, make it clear whether you’re looking for functional code, optimality or speed. And if you want them to test their code, tell them; don’t introduce false negatives by marking a candidate down for skipping something they were never asked to do.
For example, a well-communicated technical interview question would sound something like this: “In the next question, we’re looking for you to demonstrate your ability to manipulate data sets. We’re looking for a working program, and optimality will be considered but is not a priority. Afterward, we’ll have a conversation about how you might test your program and what the edge cases might be.”
These methods of assessing competencies are also best practices for in-person interviews, but because it’s more difficult to read body language and clarify minor points in a remote setting, they’re even more critical there.
Remote interviewers need to be competent technical evaluators, but they must also display kindness, empathy and adherence to clear guidelines. This helps put candidates at ease and lets them demonstrate their true skillset. In remote interviews, it’s especially important to start the interview by building rapport with the candidate. We start every interview with project conversations that give candidates the chance to explain their best work examples. This gets them comfortable with the virtual environment and builds confidence ahead of coding questions.
At Karat, we have a dedicated community of interview engineers — software engineers whose job it is to conduct technical interviews. Like any other profession, they get better with practice, and we quality control their performance and mentor them to be better over time. We coach them to be aware of bias-inducing handholding, and to give the right assistance or hints when appropriate. Even a carefully delivered hint can massively skew a candidate's performance, so it’s important to set clear expectations and make sure interviewers stay within guidelines.
One advantage of remote/video interviews is that you can record them, not just to assess the candidate more fairly, but also to review your interviewers’ performance and coach them to improve.
As an engineer, I’ve found that measuring performance consistently is the most important part of the process. While subjectivity and bias are by no means absent from in-person interviews or hiring roundtables, aggregating inconsistent feedback from a network of remote interviewers is a surefire way to ruin your hiring signal.
First, to generate usable interview data, interviewers must make observations rather than conclusions. A good observation is that “candidate X was able to write fully functional and optimized programs for the first two questions with moderate debugging, but ran out of time on question three.” A potentially bias-inducing conclusion would look more like “candidate X had several time-consuming bugs in early questions and, as a result, was unable to complete the assignment.”
Second, make sure everyone is using the same language to describe candidate performance. For instance, if I told you that BuiltIn.com was a good resource, a pretty good resource or a great resource, you’d have a decent idea of what I meant. But if you have 12 different interviewers who say a candidate is OK, strong, pretty good or great, it’s a lot harder to pin down a hiring signal.
Create a structured scoring rubric so everyone is evaluating on the same scale and speaking the same language. For each competency, we use drop-down menus with clear performance observations to limit the variables that interviewers can introduce. This creates a consistent hiring bar across office sites, homes, countries ... wherever your hiring managers may be.
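To make the idea concrete, here’s a minimal sketch of what such a rubric could look like as a data structure. The competency names and performance levels below are illustrative assumptions, not Karat’s actual rubric; the point is that each competency maps to a fixed menu of observations, so interviewers pick from the same options rather than writing free-form conclusions:

```python
# Hypothetical structured rubric: each competency has a fixed,
# ordered list of allowed observations (the "drop-down" options).
# These names and levels are illustrative only.
RUBRIC = {
    "data_manipulation": [
        "no working solution",
        "working solution with significant debugging",
        "working solution with moderate debugging",
        "working, optimized solution",
    ],
    "testing": [
        "did not identify edge cases",
        "identified some edge cases when prompted",
        "proactively identified and tested edge cases",
    ],
}

def record_observation(scores, competency, observation):
    """Record an observation only if it is one of the rubric's fixed options."""
    allowed = RUBRIC.get(competency)
    if allowed is None:
        raise ValueError(f"Unknown competency: {competency!r}")
    if observation not in allowed:
        raise ValueError(f"Free-form feedback not allowed; choose one of: {allowed}")
    # Store the option's index so performance is comparable across interviewers.
    scores[competency] = allowed.index(observation)
    return scores

scores = {}
record_observation(scores, "data_manipulation",
                   "working solution with moderate debugging")
record_observation(scores, "testing",
                   "proactively identified and tested edge cases")
print(scores)  # {'data_manipulation': 2, 'testing': 2}
```

Because the function rejects anything outside the fixed options, an interviewer can’t submit “seemed pretty strong” — the rubric itself enforces the shared vocabulary.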
Every company should create a structured interview process that clearly defines and communicates competencies; trains and reviews interviewers; and consistently measures performance. If you do, you’ll get a much more predictive hiring signal, which will justify the amount of time you’re having your engineers spend in interviews.
Going remote and distributing the hiring process will quickly amplify any inconsistencies in your current process, so it’s worth developing and committing to a plan now, before losing your hiring signal altogether.