I am sorry to hear that you had a bad experience. Our interview process is a trade-off and has one big downside - it may take more time and effort compared to classic interviews. It can also feel disappointing if the team does not vote in favor of the candidate's application.
However, if there was something else wrong with your experience and you are willing to share, please email me at sasha@goteleport.com.
Non-involved opinion here: it appears that a self-confident and clearly communicating C*O is explaining exactly why the company is completely correct, while evidence from at least two actual non-company people shows examples of this not being the case. Isn't it common for self-assured execs to explain away all the objections of outsiders, despite evidence directly presented? Looks like it here. $0.02
No specific criticism of the process was offered, so a general justification is warranted.
Personally I became interested in working for Teleport in large measure because the interview process tested my practical skills, rather than having me pull leetcode trivia out of my ass. I haven’t regretted my decision whatsoever, all of my engineering teammates here that I’ve worked directly with are very responsible and competent and the company appears to be growing mostly in the right directions.
I like Teleport. If you're doing work samples, why is your team voting in favor of applications? Part of the point of work samples is factoring out that kind of subjectivity.
That's a fair question. The team votes on specific aspects of the implementation that cannot be verified by running a program, for example:
* Error handling and code structure - whether the code handles errors well and has a clear, modular structure, or instead crashes on invalid inputs, or works but crams everything into one function.
* Communication - whether all PR comments were acknowledged and addressed during the code review process.
Other criteria, like whether the code sets up HTTPS correctly and implements authentication, are more clear-cut.
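As a hypothetical illustration of the error-handling criterion (this is invented example code, not Teleport's actual grading material), compare a function that crashes on bad input with one that rejects it explicitly:

```python
def parse_port_fragile(value):
    # Raises a bare ValueError on "abc", and silently accepts 99999.
    return int(value)

def parse_port(value):
    """Parse a TCP port, rejecting invalid input with a clear error."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"port must be an integer, got {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

A reviewer grading "error handling" would favor the second shape: invalid input produces a deliberate, descriptive error rather than an unhandled crash or a silently wrong value.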
However, you have a good point. I will chat with the team and see if we can reduce the number of criteria that are subject to personal interpretation and replace them with automated checks going forward.
We're a work-sample culture here too, and one of the big concerns we have is asking people to do work-sample tests and then face a subjective interview. Too many companies have cargo-culted work-sample tests as just another hurdle in the standard interview loop, and everyone just knows that the whole game is about winning the interview loop, not about the homework assignments.
A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.
That's a fair concern. We don't add extra steps to the interview process; our team votes only on the submitted code. However, we did not spend as much time as we should have thinking about automating as many of those steps as possible.
For some challenges we wrote a public linter and tester, so folks can self-test and iterate before they submit the code:
The good news is, if you've run this process a bunch of times with votes, you should have a lot of raw material from which to make a rubric, and then the only process change you need is "lose the vote, and instead randomly select someone to evaluate the rubric against the submission". Your process will get more efficient and more accurate at the same time, which isn't usually a win you get to have. :)
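As a rough sketch of that process change (the rubric items, names, and mechanics here are all invented for illustration, not anyone's actual process), "lose the vote, randomly select someone to evaluate the rubric" might look like:

```python
import random

# Hypothetical rubric: pass/fail items written down before reviewing begins.
RUBRIC = [
    "handles invalid input without crashing",
    "code is split into focused functions/modules",
    "HTTPS and authentication are set up correctly",
    "all review comments were acknowledged and addressed",
]

def assign_reviewer(engineers, submission_id):
    # Deterministic per-submission random choice: spreads the load and
    # keeps any one person's mood from deciding every outcome.
    return random.Random(submission_id).choice(engineers)

def evaluate(checks):
    """checks: dict mapping rubric item -> bool. Qualify only if all pass."""
    return all(checks[item] for item in RUBRIC)
```

The point of the sketch is that the decision rule is fixed and inspectable; who applies it becomes an implementation detail rather than the outcome's main variable.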
Disclaimer: I'm a Teleport employee, and participate in hiring for our SRE and tools folks.
> A rubric written in advance that would allow a single person to vet a work sample response mostly cures the problem you have right now. The red flag is the vote.
I argue the opposite: Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag.
The one engineer vetting the submission may be reviewing it before lunch, or may have had a bad week, turning a hire into a no-hire. [1] Not a deal breaker in an iterated PR review game, but rough for a single-round hiring game. Beyond that, multiple samples from a population give data closer to the truth than any single sample.
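That "multiple samples" point is just the statistics of averaging. Under the simplifying assumption that each reviewer's score is the candidate's true quality plus independent noise, a quick simulation (parameters invented for illustration) shows the panel average varying less than any single reviewer:

```python
import random
import statistics

def simulate(true_quality=7.0, noise_sd=1.5, reviewers=5, trials=2000, seed=0):
    """Compare the spread of one reviewer's score vs a 5-reviewer average."""
    rng = random.Random(seed)
    single, averaged = [], []
    for _ in range(trials):
        scores = [true_quality + rng.gauss(0, noise_sd) for _ in range(reviewers)]
        single.append(scores[0])                  # one reviewer's verdict
        averaged.append(statistics.mean(scores))  # panel average
    return statistics.stdev(single), statistics.stdev(averaged)

single_sd, panel_sd = simulate()
# With n independent reviewers, the panel's spread shrinks roughly by 1/sqrt(n).
```

Of course, real reviewers' errors are not independent (shared biases don't average out), which is exactly the gap a written rubric is meant to close.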
There is also a humanist element related to current employees: Giving peers a role and voice in hiring builds trust, camaraderie, and empathy for candidates. When a new hire lands, I want peers to be invested and excited to see them.
If you treat hiring as a mechanical process, you'll hire machines. Great software isn't built by machines... (yet)
If you really, honestly believe that multiple human opinions and a consensus process is a requirement for hiring, I think you shouldn't be asking people to do work samples, because you're not serious about them. You're asking people to do work --- probably uncompensated --- to demonstrate their ability to solve problems. But then you're asking your team to override what the work sample says, mooting some (or all) of the work you asked candidates to do. This is why people hate work sample processes. It's why we go way out of our way not to have processes that work this way.
We've done group discussions about candidates before, too. But we do them to build a rubric, so that we can lock in a consistent set of guidelines about what technically qualifies a candidate. The goal of spending the effort (and inviting the nondeterminism and bias) of having a group process is to get to a point where you can stop doing that, so your engineering team learns, and locks in a consistent decision process --- so that you can then communicate that decision process to candidates and not have them worry if you're going to jerk them around because a cranky backend engineer forgets their coffee before the group vote.
I don't so much care whether you use consensus processes to evaluate "culture fit", beyond that I think "culture fit" is a terrible idea that mostly serves to ensure you're hiring people with the same opinion on Elden Ring vs. HFW. But if you're using consensus to judge a work sample, as was said upthread, I think you're misusing work samples.
You can also hire people without work samples. We've hired people that way! There are people our team has worked with for years that we've picked up, and there are people we picked up for other reasons (like doing neat stuff with our platform). In none of these cases did we ever take a vote.
(If I had my way, we'd work sample everyone, if only to collect the data on how people we're confident about do against our rubric, so we can tune the rubric. But I'm just one person here.)
Finally: a rubric doesn't mean "scored by machines". I just got finished saying, you build a rubric so that a person can go evaluate it. I've never managed to get to a point where I could just run a script to make a decision, and I've never been tempted to try.
I'll add: I'm not just making this stuff up. This is how I've run hiring processes for about 12 years, not at crazy scale, but "a dozen a year" easily. It's also how we hire at our current company. I object, strongly, to the idea that we have a culture of "machines", and not just because if they were machines I'd get my way more often in engineering debates. We have one of the best and most human cultures I've ever worked at, and we reject the idea that a lack of team votes is a red flag.
Strongly agree with this, two key concepts in particular:
1. Using group discussion to make the principled rubric is incredibly respectful of everyone’s (employee and candidate) time, not just now but future time. Using the rubric is also unreasonably effective at getting clearer pictures of people quickly.
2. Systematic doesn’t mean automated; hiring should aspire to be systematic to the point that it makes no difference who interviewed the candidate, and all the difference which candidate was interviewed.
I’ll add one …
3. If you have a rubric setting a consistent bar, share feedback with the candidate in real time (for example, asking “help me understand this choice; I might have done it differently”) as well as synthesized feedback at the end: “This is my takeaway, is it fair?”
Contrary to urban legend, this never got us sued. Every candidate, particularly those being told no, said it was refreshing to hear where they stood and appreciated the opportunity to revisit or clarify before leaving the room. The key is a clear, non-judgmental synthesis followed by, “Is that fair?”
You’re mistaken, we do have a rubric. All of the members of the interview team grade the interviewee according to the rubric, and the scores are then combined into “votes”.
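As a hypothetical sketch of that combination step (the threshold, majority rule, and function names are all invented here; the source doesn't describe the actual math), each interviewer's rubric scores could be reduced to a vote and the votes tallied:

```python
def to_vote(scores, threshold=0.75):
    """Reduce one interviewer's rubric scores (each 0-1) to a hire vote."""
    return sum(scores) / len(scores) >= threshold

def tally(all_scores, threshold=0.75):
    """Hire if a majority of interviewers' rubric averages clear the bar."""
    votes = [to_vote(scores, threshold) for scores in all_scores]
    return sum(votes) > len(votes) / 2
```

The design question being debated upthread is exactly where the judgment lives: in a scheme like this, the rubric constrains what each interviewer scores, but the threshold and the majority rule are still policy choices made by people.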
That's good. I'm responding to "Not having multiple human opinions and a hiring discussion/vote/consensus is a red flag". I think having combined scores is an own-goal, but having people vote based on their opinions is something worse than that (if you're having people do work samples).
Here's what I think it boils down to: working on a codebase with your coworkers is (or at least certainly should be) an inherently collaborative process. On the other hand, a job interview is, in a sense, inherently antagonistic. No matter what shape the interview takes, these people aren't your friends, they aren't your coworkers, they are gatekeepers.
I already have a job as a programmer. At work, I can push back on my coworkers and debate the merits of various designs until we all reach a consensus. But with the Teleport interview, there's an inherent power imbalance that makes that impossible: "I'd really like to argue about this, because I don't think I agree, but I'm afraid that will decrease the chances of them hiring me."
And the only people who are in a position to change this process are the ones who have already gotten through it successfully.
From my perspective you’re unfairly projecting bad faith onto Teleport and shooting yourself in the foot in the process.
1) You’re assuming that a good faith argument would decrease the chances of us hiring you, but for the most part that isn’t the case. We’re an engineering company building a complex security product — the only way that can be done well is via a culture that’s perennially open to criticism, debate, and going with the better argument. In my tenure at Teleport, I’ve never experienced explicit or implicit punishment for voicing my opinion, even when it contradicted a more senior engineer’s opinion. The argument has always been evaluated on its merits and the correct option taken. An interviewee making a good argument and proving an interviewer wrong should, and based on my experience would, increase your chances of being hired.
2) I can imagine you retorting that even if that’s truly the case at Teleport, there’s no way you could know that beforehand, and due to the “antagonistic” nature of us being the “gatekeepers”, you’re forced to assume the worst. But if your goal is to work in a collaborative environment where criticism and debate are tolerated, then your implicit strategy makes no sense. If Teleport is the type of place you’d like to work, then pushback in the interview process will be well received; if it isn’t, then you won’t even get an offer. So you have nothing to lose by giving your true opinion, but if you assume the worst and self-censor in an attempt to brown-nose the hiring team, you risk ending up in the shitty work environment you were hoping to avoid.
Yep, imbalance, dynamics, so much to skew the process. If you think your interview process works, great, but it likely doesn't and you just got lucky. All the good people you screened out vs. all the cruft you saved yourself from... you will never know!
Being a programmer isn't about what you know, it's about how you learn. Born programmers vs. learned programmers: you've got a coding test for that? Really? If you think you can screen for anything more than familiarity, you've been sniffing that corporate glue for too long.
If you come to me thinking I am suitable for a job, you reach out via LinkedIn, you see my public repos, and then you ask me to code for you on demand like a monkey?! Pull the other one!
(not referencing OP; general comment on interview processes)
https://goteleport.com/blog/coding-challenge/
We are also trying to be as transparent as possible with our challenges being open source:
https://github.com/gravitational/careers/tree/main/challenge...
and requirements being published here:
https://github.com/gravitational/careers/blob/main/levels.pd...