The software engineering interview process can be extremely challenging for candidates and companies alike. There are countless ways to try to get signal on every dimension of a candidate. In an attempt to vet for everything, some interview processes turn into a mix of everything from resume reviews, to logic puzzles, to solving the traveling salesman problem during a technical screen.
At Hardfin, we’re moving fast to solve the increasingly complex challenges of Hardware-as-a-Service. Sure, it’s important for candidates to know how to write code, but what’s equally important is how well they can apply their skills and knowledge in real-life situations.
That’s why our first technical interview is a phone screen that dives deep into real situations candidates have faced in their careers. We believe that true proficiency as a software engineer goes beyond coding alone. It’s about being adaptable, innovative, and able to collaborate effectively to solve real problems in a dynamic, team environment.
To kick things off, we provide candidates with a variety of prompts covering different situations they might have encountered during their careers. We ask candidates to choose two prompts and be prepared to dig deep into those experiences during the interview. These prompts are communicated well in advance of the interview to give candidates time to choose what resonates with them the most and recall details of the specific situations they’d like to discuss.
During the actual interview, our main objective is to understand the candidate's role in those specific situations and the impact they had. We want all the juicy details! We want to know about their specific responsibilities, how they made decisions, and the results they achieved. We're also interested in how candidates work with external stakeholders. Do they collaborate well with other teams, clients, or partners? We want to know about their teamwork skills, their ability to communicate effectively, and how they adapt to different working relationships. And while we understand that very few projects are completed in a vacuum, and that candidates were most likely working as part of a larger group on the projects being discussed, we focus the discussion on the candidate's specific role in these situations and their specific contributions.
We provide 11 prompts to help candidates identify specific situations to discuss:
- An interesting bug, how you found it and what the fix was
- An interesting outage, what the root cause was, what you learned, and how you addressed issues afterward
- A time you handed off a codebase to other people
- A project where you did much more than was expected of you
- A time when you took ownership of a project
- A significant project that came via an idea that you had
- A major unexpected issue during a project, and how you got around it
- A time when you juggled multiple concurrent projects
- A time when you worked on a project that had a major course correction midway
- A time you had to present a complex topic to junior engineers, another team, other stakeholders, etc.
- Something interesting and surprising you learned recently
Sometimes candidates ask why we offer so many prompts to choose from. Not every candidate will have a story about solving an impossible bug or building a groundbreaking feature from scratch. But that doesn’t mean they haven’t made a real impact in other areas in their previous roles!
By giving candidates a variety of prompts, we want to provide the opportunity for them to spark creativity in their interview and let them shine by sharing their proudest achievements. We want to hear about the projects they’ve worked on that really show off their abilities and the positive impact they’ve had.
Hardfin phone screen examples
Here are two short examples demonstrating how a Hardfin phone screen might go — one we would consider a great interview and one we would consider poor — based on what we look for from candidates.
Prompt: An interesting bug, how you found it, and what the fix was
A great phone screen
A user reported a bug in a library I maintain. The evidence of the bug was an obscure HTTP response when the library requested a JWT, rather than an errant code path. After some back and forth, it was clear that I'd need to be on the failing machine to really determine what was happening, because the user could execute the code on OS X but not on EC2. After hopping on a screen share with the bug reporter, validating the JWT body, and verifying the inputs, I realized that certain timestamps were drastically different between EC2 and OS X. This led to the realization that the clock had drifted on the EC2 instance and the time was wrong. In fact, it was 450 minutes in the past. Because of this, the library was asking for tokens that expired 390 minutes in the past. Once I realized this fact, the fix was as simple as running `sudo ntpdate -s ntp.ubuntu.com` on the EC2 instance to correctly set the system clock.
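The arithmetic behind this failure can be illustrated with a minimal sketch. This is not the actual library code — the drift and lifetime values are taken from the story above, and the variable names are our own — but it shows why a machine whose clock has drifted into the past mints tokens that look long-expired to everyone else:

```python
# Hypothetical sketch of the clock-drift failure: a machine whose clock
# is 450 minutes behind stamps tokens that, measured against real time,
# expired 390 minutes ago (assuming a 60-minute token lifetime).
from datetime import datetime, timedelta, timezone

DRIFT = timedelta(minutes=450)    # clock skew on the EC2 instance
LIFETIME = timedelta(minutes=60)  # assumed token validity window

true_now = datetime(2023, 1, 1, 12, 0, tzinfo=timezone.utc)  # real time
local_now = true_now - DRIFT      # what the drifted machine believes

# The client stamps the token's expiry using its (wrong) local clock.
token_exp = local_now + LIFETIME

# The server checks expiry against real time.
expired_by = true_now - token_exp
print(expired_by)  # → 6:30:00, i.e. "expired" 390 minutes ago
```

Fixing the system clock makes `local_now` equal `true_now`, so freshly minted tokens expire in the future again — which is exactly what the `ntpdate` fix accomplished.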
Questions and answers
- What about the back and forth with the bug reporter made you feel you needed to be on the failing machine? I had exhausted all other options, such as reinstalling the library and attempting to replicate the bug on my own machine, and the same code and same credentials worked on a different machine.
- Can you walk through some of the debugging steps you took with the user? When screen sharing with the bug reporter, I used Python in interactive mode to step into the code right where the exception occurred. I checked to ensure that the same JWT was being generated for the same inputs on both OS X and EC2.
Why we consider the above a great response
- The response focuses on what the candidate themselves did, not on what others did or what generally happened
- The response goes into detail on specific actions the candidate took, key realizations they made, and their solution. There is a similar level of detail in the answers to the follow-up questions
A poor phone screen
A user reported a bug in a library I maintain. We realized that the inputs being used to generate the result were not the same. This led to the realization that the clock had drifted on the machine. We fixed the clock skew and that resolved the bug.
Questions and answers
- Can you describe in more detail what the issue was? The user was getting an unexpected HTTP response from the library.
- What made you decide to debug this issue with the user over screen share? After talking with the user for a bit, I still did not have enough information to debug.
- Can you walk through some of the debugging steps you took with the user? We ran the code on their dev machine, ran it on the EC2 instance they were using, and compared the results to see if they were the same.
Why we consider the above a poor response
- The response uses “we” instead of focusing on the candidate’s specific contributions in the scenario
- The response contains very little specific detail on the bug, the debugging steps, and the solution. There is a similar lack of detail in the answers to the follow-up questions
These examples are based on an article written by our Head of Engineering, Danny Hermes, about one of the more interesting bugs he’s encountered in his career.
An engineer’s ability to solve real problems in a team environment is the most important skill for a software engineer at Hardfin. That’s why it’s the first thing we look for in our interview process. This interview format helps us identify individuals who have the technical expertise and practical, team-oriented mindset needed to tackle the engineering challenges we face at Hardfin. And while there is a coding component to the Hardfin interview process, we do not think coding competency is the only thing we should be looking for in a software engineer.