We all love technical interviews. Especially the whiteboarding, the tech grilling, the soft killing and a whole host of irrelevant conversation fillers. Sometimes it almost feels like an awkward first date.
An interrogation technique doesn't work well for an interview; it should instead feel more like a conversation with a psychologist: an exploratory journey into the mind of a serial killer (of bugs).
For my team at Ceridian, we wanted to come up with a better way to interview software developers, because the process we had did not provide sufficient breadth and depth to form an accurate picture of a candidate's aptitude and potential. The process I came up with combines many of the "right" things that other companies do while rejecting the "wrong" things that give us little to no information. The goal is to evaluate candidates rather than test them, and to figure out how well they fit within the engineering culture of our team.
We were looking to fill a full-time senior software engineering position. The process outlined below is based on our experiences interviewing at Autodesk, Microsoft, Amazon, and a host of other companies, in Silicon Valley and elsewhere.
NOTE: We are not trying to find just a good coder. We are trying to find an engineer, an architect, a developer, a communicator and a team player.
Phase 1: Prepare yourself for the interview
No, not the candidate: the interviewer! It is immensely disrespectful and unprofessional to show up to conduct an interview without researching the candidate first or preparing how to conduct the interview.
On my team, I set up mock interviews with every team member who was going to be part of the interview process. Everyone needs to be able to speak clearly and succinctly, in a friendly manner, and to evaluate the candidate's responses on the fly. An observer takes notes on the interactions between the interviewer and the candidate in a mock setting, then discusses the questions, the pace, the hints, and the clarity afterwards.
Remember all those interviews where the interviewer was hard to understand, the questions were unclear, the session was all over the place, or the interviewer didn't even know your background? That's because they didn't prepare.
Phase 2: Prepare a set of relevant questions
Every question must have a well-defined purpose and a quantifiable way to evaluate the response. The measure of a good question is how often a developer needs that concept and how deeply it affects the quality of the outcome.
Object-oriented principles, resources, references, algorithmic aptitude, explaining in abstractions, unit tests, refactoring techniques... these are all good topics, because they matter every day and affect the quality of the software and the team at every level.
The whiteboard coding and design collaborations should reflect the type of challenges actually solved in the team/company, but the problems must be posed in a general way.
- Don't ask a question because of a buzzword that you haven't done your research on.
- Don't ask trick questions! They don't provide any insight, neither in an interview nor in life.
- Don't ask questions that rely on memorized information, such as the folder structure of a certain type of project, response status codes, exception types, library functions, etc.
- Don't ask questions that rely on language syntax.
- Don't ask how many years of experience they have using this framework or that library (for a full-time position). Ever wonder why Google, MS, Amazon, etc. never ask these questions?
- Don't read out a scenario or a problem to the candidate. Present the question on a whiteboard, or at least in a conversational manner.
There's one crucial point here that many interviewers miss. Even if the candidate can answer these kinds of questions, there is a high probability that a very good candidate will not work for you, because any self-respecting software engineer who is not desperate for a job will read these kinds of questions as a sign of the team's lack of maturity and quality. However, if you find satisfaction in mediocrity, then by all means, fire away.
Phase 3: Communicate the interview process to the candidate
Communicate clearly with the candidate, at least a week before the on-site interview, what is expected of them, as well as the different parts of the interview and the topics that will be covered. The candidate must have a clear idea of the complexity of the problem-solving exercise, the depth of knowledge expected on certain topics, and what will NOT be covered. Value a candidate's time and effort to prepare and show up by providing all the relevant information for preparation. If you are not going to ask about binary trees or advanced database questions, DO mention it; if you are going to ask about JavaScript frameworks, DO mention it. There is no point in trying to catch a candidate off guard.
Phase 4: The Interview
I came up with the following structure for the actual interview process: a 45-minute phone screen followed by a 3-hour on-site on another day. We don't do any coding challenge during the screening (for a senior dev); we just get to know the candidate and ask a few technical basics.
Our on-site is composed of many different flavors:
- Rapid Fire (15 mins)
It may sometimes be necessary to cut the candidate off (politely) as soon as we have the gist of the answer.
- Whiteboard coding collaboration (45 mins)
Some data structure or algorithm questions are appropriate here. We limit our questions to arrays, hashtables, and strings. Linked list, graph, and binary tree questions may also be asked, but only if necessary; copying Google or Microsoft blindly serves no useful purpose.
Not only are we interested in the solution to the problem and the relevant unit tests, we also assess communication skills here.
NOTE: Syntax is not important, no matter what you think. Nitpicking over nonsense is extremely inefficient when you have 2-3 hours to evaluate a person with 10-15 years of experience.
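To give a sense of the level we aim for, here is a representative array/hashtable problem with the kind of unit tests we'd expect the candidate to sketch alongside the solution. This specific problem is an illustration of the category, not one of our actual questions:

```python
from collections import Counter

def first_unique_char(s: str) -> int:
    """Return the index of the first non-repeating character in s, or -1."""
    counts = Counter(s)           # first pass: count occurrences of each character
    for i, ch in enumerate(s):    # second pass: find the first character seen once
        if counts[ch] == 1:
            return i
    return -1

# The unit tests matter as much as the solution itself.
assert first_unique_char("leetcode") == 0   # 'l' appears only once
assert first_unique_char("aabbc") == 4      # 'c' is the first unique character
assert first_unique_char("aabb") == -1      # no unique character
assert first_unique_char("") == -1          # empty input edge case
```

Whether the candidate reaches for a two-pass O(n) approach like this, and whether they think of the empty-input edge case unprompted, tells us far more than whether they remember any particular syntax.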
- Deep dive on some topics (30 mins)
I ask about architecture, design principles and patterns, code quality and testing techniques.
Here I also try to gauge the candidate's overall algorithm knowledge by asking a few algorithm or optimization questions verbally. These are not scenario questions, just simple explorations of array and hashtable knowledge.
- Whiteboard design collaboration (45 mins)
Please DO ask fun system design questions here. We usually ask things like: design a drone delivery system, design a parking lot for self-driving cars, design a voice-based personal assistant, etc.
- Conversation about culture, interests, expectations (45 mins)
We avoid questions where the candidate may have to come up with a fabricated answer, such as, "describe a time when you had a conflict with a team member", or "what is your greatest weakness".
Instead we ask, "how do you keep up with current technology trends", "do you have side-projects", "what do you read", "how would you help a new hire onboard and mentor", "what do you value in your work", etc. These have to be genuinely asked, and assessed, not just as filler questions.
Phase 5: Develop a strategy to evaluate and compare candidates
I have created a matrix of dimensions defining which topics our interviews should cover and how much weight to apply to each topic. This matrix is customized for each candidate based on their experience and skillset.
The blue boxes represent topics that are more relevant than the white boxes for a particular candidate.
On average, a topic involves about 3 questions. The interviewer assigns a score based on the complexity of the question and the candidate's background. Let's analyze the scoring scheme, e.g. 8/3/1:
- The first score under each topic represents the maximum a candidate can get. An 8 in Optimization means they don't have to be outstanding at it (a 10), just good enough for our purposes (an 8); even if they answer every question perfectly, they still get 8. The extra 2 points are used as a bonus during tie-breaking.
- The red score for each topic represents the minimum score the candidate has to achieve in order to pass that topic/interview. We will not rely blindly on the scoreboard alone, but the matrix serves as a quantifiable guideline for our decision-making process.
- The * sign beside a Dimension means a score of 0 in every topic under that Dimension disqualifies the candidate (again, a review based judgement will be made nonetheless).
- A 10% difference between 2 candidates will be considered a tie. Bonus points will then be used to break the tie.
- The blue scores are weights used for tie-breaking, but they are applied only to the bonus points.
- On the Leadership dimension, both Mentoring and Communication involve fully technical aspects, hence the high scores. This dimension also evaluates the candidate's cultural fit.
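The scoring rules above can be sketched as a small program. The topic names, caps, minimums, and weights here are illustrative assumptions, not our real matrix:

```python
# topic -> (cap, minimum_to_pass): the first two numbers of a scheme like 8/3/1.
MATRIX = {
    "Optimization":  (8, 3),
    "Design":        (10, 5),
    "Communication": (9, 4),
}
# The "blue scores": tie-breaking weights, applied only to bonus points.
TIE_BREAK_WEIGHTS = {"Optimization": 1, "Design": 2, "Communication": 1}

def evaluate(raw_scores: dict) -> tuple:
    """Return (total, bonus, passed) for raw per-topic scores out of 10."""
    total, bonus, passed = 0, 0.0, True
    for topic, (cap, minimum) in MATRIX.items():
        raw = raw_scores.get(topic, 0)
        capped = min(raw, cap)                          # score is capped per topic
        total += capped
        bonus += (raw - capped) * TIE_BREAK_WEIGHTS[topic]  # overflow becomes weighted bonus
        if capped < minimum:                            # red score: minimum to pass
            passed = False
    return total, bonus, passed

def pick(a: dict, b: dict) -> str:
    """Totals within 10% of each other are a tie, broken by bonus points."""
    ta, ba, _ = evaluate(a)
    tb, bb, _ = evaluate(b)
    if abs(ta - tb) <= 0.1 * max(ta, tb):
        return "A" if ba >= bb else "B"
    return "A" if ta > tb else "B"
```

For example, a candidate scoring a raw 10 in Optimization still totals only 8 there, with the extra 2 points carried as weighted bonus for tie-breaking. As the bullets note, the numbers guide the decision; they don't replace a review-based judgement.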
Phase 6: Decision Making
After we have filled in the dimensions matrix, we create a visual representation of the candidate's fit within our team, which looks something like this:
This visual guide condenses all our information in one line and can be used even weeks later to make a decision on the candidate with a simple glance.
It can also be used by other teams if they are looking for an engineer without having to go through the entire interview all over again.
Final Thoughts
Although the process outlined here may seem similar to how many companies conduct interviews these days, there are subtle but extremely important differences in the way we do it on our team. The type of questions matters. The interaction between the interviewer and the candidate matters. The focus on what's important about an answer, and what's not, matters. The breadth and depth of the questions matter. Proper communication with the candidate BEFORE and AFTER the interview matters. Self-assessment as the interviewer matters. These are the differences that make this process effective.
References
How to Interview a Software Architect
The anatomy of the perfect technical interview from a former Amazon VP
Interviews at Google - Moishe Lettvin
Rethinking how we interview in Microsoft’s Developer Division
The one question I ask in every interview