Ten years of experience, still failing phone screens

I fail phone screens pretty often, which limits my job prospects and is embarrassing for someone with ten years of industry experience, a pretty extensive GitHub account, and a publicly available list of difference-making projects.1 In theory a phone screen is supposed to evaluate a) whether this person would be good at the job being hired for and b) whether it's worth investing another five hours in trying to hire them.2 In practice I think phone screens are pretty poor at screening candidates, for fixable reasons.

I spend a lot of time setting up my development environment to get a fast feedback loop. If I write a little bit of code I want to know immediately whether it works or not. There are a lot of components that go into this, but generally it involves being able to type quickly, having built-in editor support for building code and checking function definitions, and being able to background and foreground the terminal constantly to run tests or test scripts. All of that means I'm constantly able to exercise the code I'm writing and get feedback about whether I'm headed in the right direction or where the problems are. This is a pretty significant advantage, and it means I can work more quickly than most engineers I know.

A lot of this effort was inspired by Gary Bernhardt, who has a video series that really changed the trajectory of my career and might change yours as well. Gary is extremely fast and is obsessed with test speed. Fast tests mean that he can get feedback on whether the code he wrote works a few hundred milliseconds after typing it.

Most phone screens involve implementing some fairly complex algorithm3 in one or multiple steps on Coderpad, an online text editing tool. With Coderpad you get a single file that you have to put all of your code in, and you can't load data from any external files. Most companies don't let you look at any reference documentation either, so you have to know the standard library APIs cold or guess until you get them right.

Coderpad must work for some people since companies are able to hire a nonzero number of engineers, but it is definitely not a good fit for me. I'm not fast at reasoning about code, and I often make trivial mistakes just trying to get a "first draft" of a program out. If I get behind or the interviewer starts interrupting to ask about the bad code I'm writing I get very stressed and have trouble both listening to the interviewer and trying to address the issues with the code.4

The lack of any sort of feedback loop just totally compounds the problems. Most of the questions you are asked don't have a lot of different inputs, and they have outputs that are 100% a function of the inputs, which makes them perfect for fast, table-driven tests. But a lot of interviewers will explicitly tell you not to write tests, or in some cases even disable the ability to run the code, a Coderpad "feature." So you have to keep in your head the different branches and the different states the code can be in as you re-enter a for loop or whatever, and then figure out which of these changed when you do any sort of refactoring.
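To make that concrete, suppose the screen question were, hypothetically, "return the length of the longest run of identical characters in a string" (the question, function name, and cases here are all invented for illustration). A table-driven test covering it is only a few lines:

```python
def longest_run(s):
    """Length of the longest run of identical consecutive characters."""
    best = run = 0
    prev = None
    for ch in s:
        run = run + 1 if ch == prev else 1
        prev = ch
        best = max(best, run)
    return best

# Each case is (input, expected output); one loop exercises them all,
# which is exactly the fast feedback an interview setting denies you.
CASES = [
    ("", 0),
    ("a", 1),
    ("aab", 2),
    ("abbbcc", 3),
    ("aaaa", 4),
]

for s, want in CASES:
    got = longest_run(s)
    assert got == want, f"longest_run({s!r}) = {got}, want {want}"
```

Adding a new edge case is one line in the table, and the whole suite runs in milliseconds.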

Objectively the screeners are making a good decision to fail me;5 I probably look like I have no idea what I'm doing, and I'm only sometimes able to get a good result by the end of the interview. But this is a bit like declaring someone a bad sous chef after asking them to bake a rare pastry without a recipe. I am more likely to get an offer from an onsite interview than I am to pass an initial phone screen, which seems backwards.

I don't think "can you implement this algorithm" is the best thing to spend your entire hour evaluating during a phone screen, but even if you think it is, there are ways you can improve it significantly. Actually provide a set of tests, or allow the candidate to write and run a test suite, or to run code in their own text editor and paste it into the Coderpad at the end. Offer the option of a harder question that still has a time limit but can be completed without someone watching over your shoulder and asking whether that variable name is really necessary.

Anyway, I think a better idea would be to spend a shorter amount of time evaluating a larger range of topics. Triplebyte has a quiz that actually does a pretty good job of this; you get asked thirty or so questions about databases, networking, and simple coding questions ("what is the value of 'a' when you get to line 17?"). The last time I interviewed I took their quiz and did well enough that I was able to skip directly to onsites at every company they work with. Most of the value that Triplebyte provides comes from ensuring companies don't bounce good candidates because of a bad phone screen or a poorly calibrated interviewer; it's essentially a way to hack around bad internal processes.

Besides the fact that people like me get extremely stressed implementing a new algorithm under a time limit, without being able to exercise the code, while someone watches and asks questions, there are diminishing returns to spending the 30th or 40th minute on the same topic, and this is such a tiny part of the job anyway. It would be better to ask a larger number of easily gradable questions that represent the stuff most people will actually do in their jobs, and that will give you signal in a number of different directions. Some ideas:

  • Clone this source code repo and tell me (what timeout is set for X third party API/what endpoint you'd need to hit to do Z/any other information in the repo that can be determined with a yes/no answer).

  • Say you have a function that checks whether three integers are even numbers that increase. What tests would you want to write to tell whether it's correct?

  • Why is it bad to write a file to the disk one byte at a time?

  • What can happen if you leave a database transaction open for a long time?

  • Here's a CRUD app, copy existing code patterns to implement a new CRUD endpoint (to be honest, this is a huge part of the job at most companies).

  • Here's a stack trace from an open source project, can you tell me what went wrong?

  • What's a data race? Here's some simple code with a data race, can you tell me where the problem is?

  • Let's say you have a database table with a user ID and a balance denominated in integer numbers of cents, what's a SQL query that could charge ten dollars to the user's account?

  • What are some examples of good and bad production alerts?

  • In a few sentences, how does UTF-8 represent different types of characters?

  • Here's an HTML template with an XSS, what would you need to type into the email address form to get an alert to pop up on the screen?

And others.6 You could have a rubric for each question, then add up the scores or require X number of nonzero answers. I think those questions would give you a better sense of whether someone would do a good job at your company than a single 45-minute algorithm question would.7
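As a sketch of what a graded answer to the balance question above might look like: here is one possible query, demonstrated against an in-memory SQLite database (the table and column names are invented; the question as posed doesn't specify a schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (user_id INTEGER PRIMARY KEY, balance_cents INTEGER NOT NULL)"
)
conn.execute("INSERT INTO accounts VALUES (1, 5000)")  # user 1 starts with $50.00

# Charging ten dollars means subtracting 1000 cents. Doing the arithmetic
# inside a single UPDATE keeps the read-modify-write atomic, instead of
# racing a separate SELECT followed by an UPDATE.
conn.execute(
    "UPDATE accounts SET balance_cents = balance_cents - 1000 WHERE user_id = ?",
    (1,),
)
conn.commit()

balance = conn.execute(
    "SELECT balance_cents FROM accounts WHERE user_id = 1"
).fetchone()[0]
print(balance)  # 4000
```

A rubric could award points for denominating in cents, for the single atomic UPDATE, and for mentioning what should happen when the balance would go negative.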

1. Recently I had a phone screen at a company you've heard of go poorly and then have a VP intervene internally because the phone screen feedback made no sense based on my track record.

2. Another fallacy people tell themselves about the phone screen is that "it's just to tell whether candidates can program," but if this were true, interviewers would not fail people for correctly coding the brute-force solution, or they'd more often provide the instructions for implementing an algorithm, which they never do.

3. I'm far from the first person to talk about this, but the focus on algorithms just baffles me given how rarely you need to implement a complex algorithm in actual production code. Here's a job queue and a scheduler that's about 5,000 lines of Go code, and probably the most complex thing that happens in it is a for loop. A lot of thought had to go into the API design to avoid long-running transactions, and a lot of work went into finding a fast and correct version of the dequeue query, but you'd never test for those in a phone screen as currently designed. Another project I worked on for a client is a fully featured Twilio API browser, 9,000 lines, that again really does not have any code as complex as what would be tested in a phone screen. I used an LRU cache in one place, but I used an implementation that Brad Fitzpatrick wrote; there's no reason to write that myself.
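(A side note on the LRU point: the same "use an existing implementation" argument holds in Python, where the standard library ships one. This toy example, with an invented function and a counter added purely to make the caching visible, shows repeat calls being short-circuited.)

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)  # evicts least-recently-used entries beyond 128
def slow_lookup(key):
    global calls
    calls += 1       # count how often we actually do the "slow" work
    return key * 2   # stand-in for an expensive computation

slow_lookup(10)
slow_lookup(10)      # served from cache; the function body does not re-run
print(calls)  # 1
```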

4. This also happens when people ask me to write code on a whiteboard but that doesn't happen anymore due to COVID, and anyway I just tell companies that I need to write code on my laptop or I'm not going to do the interview and they will generally let me do that.

5. I shouldn't rule out the possibility that they are making the right decision and I am in fact bad at engineering. Certainly there are a lot of confident, mediocre tall white men in the tech industry. That said, I ran a successful one-person consulting business for several years, and presumably if I was not delivering more value to each company than I was invoicing, the companies I was working with (Segment, Notion, ngrok, One Medical) would not have renewed my contract multiple times.

6. If this sounds interesting, and you'd like help designing a phone screen experience that's tailored for your company and the types of experience you want to screen for, get in touch!

7. I have a bunch of ideas around testing how quickly and how accurately people can type, or accurately transcribe text from a second screen, that I think could be decent predictors of job performance but I'm not sure are fair to test out in a professional context without academic research that could validate them at least a little bit.


2 thoughts on “Ten years of experience, still failing phone screens”


    Current interviews are failing both the candidates and the businesses trying to hire them.

    No 1 hour interview can predict how you’re going to perform on the job.

    Any interview that spends large amounts of time arguing about angels dancing on the head of a pin is a pure waste of everyone’s time.

    Companies seem to want people to argue that angels-on-the-head-of-a-pin thing in a misguided attempt to hire “the best”, when what they should be aiming for is “the most balanced”.

    I don’t want a team of 10 superstars. I want a team of 10 people who, though perhaps not superstars, support, listen to, and protect each other and consistently work together to deliver a common goal.

    My philosophy to hiring (that I’ve used for software engineers):
    1) Ask enough questions to determine whether they know how to turn on the computer, have *basic* skills, and can demonstrate natural curiosity; whoever does gets in.
    2) Have a probation period. I’m serious about this. This is where you get to see if the person really is a good match for the organisation. Can they mesh with their team, can they learn, can they deliver, can they teach, do they make the world around them a little brighter?

    If the probation highlights issues, a lack of creativity, a lack of curiosity, or a contractor mentality of simply moving Jira tickets along: it’s been a blast, but see ya!

    Companies will learn, but it’s going to take a long time.

  2. Jerome

    Definitely, I am in the same situation but on the sys/devops side.
    I have implemented, managed, and operated so many stacks, often ones I didn’t want but which made another bullet point on my resume, that it is too difficult for me to squeeze all of that into a one-hour interview (if you are lucky enough to get that much time and it isn’t eaten up by the discussion about background).

    Unfortunately, when I interview for a role that covers dozens and dozens of technologies in different contexts from a 15-year work history, and I’m expected to elaborate on all of them with the same accuracy in 30-45 minutes, I can fail miserably. Getting one (or two, if lucky, since COVID) interview per month, each for a different role in the same field, does not help.

    In fact, the failure is not that miserable; often the excuse is that you don’t have enough production experience with tech n-1, despite having more complex and challenging things under your belt.

    I think it is easier for a software developer; patterns and methodologies are more easily transferable, IMHO.

