At work I mostly use Claude Code and ChatGPT web for general queries, but Cursor is probably the most popular in our company. I don't think we are "cooked", but it definitely changes how development will be done.
I think the process of coming up with solutions will still be there but implementation is much faster now.
My observations:
1. What works for me is the usual: work iteratively on a plan, then implement and review. The more constraints I put into the plan, the better.
2. The biggest problem for me is the LLM assuming something wrong and then having to steer it back or redo the plan.
3. Exploring and onboarding to new codebases is much faster.
4. I don’t see the 10x speedup, but I do see that I can now prototype and discard ideas quickly. For example, I no longer spend 20-30 minutes writing something just to revert it if I don’t like how it looks or works.
5. Mental exhaustion when working on multiple projects/agent sessions is real, so I tend to keep only one. Having to constantly switch my mental model of a problem is much more draining than the “old” way of working on a single problem. Basically, the more I give in to vibing, the harder it is to review and understand.
I find Leetcode discussions strange. I recently changed jobs, so I was preparing by doing some Leetcode problems. The actual problems I was asked during the interview process were very simple compared to what I expected. Interviewers even explicitly stated that they wanted to see how I think, and the solution didn't even have to compile. So in my experience the whole idea that "Leetcode is harmful" is a little overblown.
To me the value of coding interviews is knowing enough algorithms/data structures to apply and adapt them to problems, being able to quickly sketch out your idea, and explaining trade-offs.
If I used Leetcode to interview someone I wouldn't even ask them to code a solution.
I'd show them a few problems that I thoroughly understand, ask them to pick two or three of them, and ask them for their thoughts on how to approach them. It would just be a discussion with no need to touch a computer. A whiteboard and scratch paper would be available if they wanted to use them.
I'd provide enough input to keep the discussion moving toward a solution, but would try to let them take the lead, contributing only enough myself to make sure we make progress.
If I ended up providing most of the solution that would be fine. The most important part of the interview comes later.
Then what we'd do is look at the official answer and talk about it, seeing where it matches what we expected and where it does things differently, and maybe talk about whether those differences make it better than what we came up with.
Finally, and this is the most important part, we'd go to the Leetcode discussion for the problem.
In the discussion people post all kinds of solutions and those have all kinds of errors. Some are just flat out wrong. Some are mostly right but miss common edge cases (edge cases I would have made sure the interviewee and I talked about earlier). Some miss somewhat ridiculous edge cases such as failing if an input array is so large that all positive integers in the language they are using are valid array indexes. Some always give the right answer but don't meet the specified time or space constraints.
We'd do code reviews on some of those, again with me trying to just provide enough input to keep things moving along.
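To illustrate the kind of submission I mean, here's a made-up example (not from any specific discussion thread): a maximum-subarray solution that passes most tests but silently mishandles the all-negative edge case, next to a corrected version.

```python
# Hypothetical example of a common edge-case miss: Kadane's algorithm
# seeded with 0 instead of an actual array element.
def max_subarray_buggy(nums):
    best = cur = 0                 # bug: assumes the best sum is >= 0
    for x in nums:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best                    # returns 0 for all-negative input

def max_subarray_fixed(nums):
    best = cur = nums[0]           # seed with a real element instead of 0
    for x in nums[1:]:
        cur = max(x, cur + x)      # either extend the run or start fresh
        best = max(best, cur)
    return best

print(max_subarray_buggy([-3, -1, -2]))  # 0 (wrong; the answer is -1)
print(max_subarray_fixed([-3, -1, -2]))  # -1
```

Spotting why the first version is wrong, and for which inputs, is exactly the kind of conversation I'd want to have in that review.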
But I find the whole discussion not strange but rather dishonest. I realize I'm biased towards thinking that the blog posts condemning some particular style of interviewing come from the "losers", the people who didn't get the job. That isn't true in general, but there is still this aura of entitlement: "I am a good programmer! I have skills other than leetcode! I demand an interview process that is tailored to my specific skills!"
I don't believe the claims that the current style of interviews is not working. This claim is usually made in blog posts and interview horror stories, but I haven't seen it at any of the FAANGs I've worked at. Sure, some questions give more signal than others, but that's the company's problem, not the candidate's. If anything, large companies realize that lack of diversity is a larger problem and are trying to combat that directly (though not by changing the hiring process for the general case).
What it feels like is that the blog posts try to solve a different problem ("how can I get a FAANG job") under the guise of making the world a better place. Meanwhile, companies try to solve a completely different problem altogether ("how do we weed out candidates that might not be a good fit for what we value, at scale, with low latency?").
As a hiring manager, I'm interested in applicants who are (among other things) good at leetcode-style questions without having had to train a lot on leetcode: technical problems that are not easy to understand (cognitive and communication abilities), that have a simple, easy trivial solution (a higher-level view of problems and customer orientation), and that have high technical depth at scale (specific technical knowledge). So that's where the post is right: don't spend so much time on leetcode. But also don't try to change how companies interview. It's unbecoming.
Lol, almost all Leetcode is memorized; most of the algorithms took researchers years to develop and understand. Someone isn’t going to figure one out in 45 minutes without having seen it before.
I guess it is strange to me, as you usually only hear about bad stories. So I was expecting it to be much worse.
I agree, as long as companies evaluate candidates by a combination of different tests, this should be the best for everybody.
Two companies for which I went through the whole interview process had 5-7 different interviews, and only one of them was a leetcode-style problem. So even if somebody fails the leetcode problem, they can excel at something else.
Depends on the company. FAANG and unicorns will absolutely ask for it to compile (except Google, which asks questions in a Google Doc of all things), and they want the optimal solution almost right away and will penalize you for not getting it.
I just had this unfortunate experience with Google. The interviewer presented some badly-defined vague problem statement, and started asking probing questions about implementation details right away. I'm trying to ask exactly what the problem is, and he's there asking me for the runtime complexity of the solution I'm proposing. Was this dude entirely checked out? It felt like I was talking to a wall.
> Interviewers even explicitly stated that they wanted to see how I think
That's a common phrase, but it's hardly ever true. You'll actually need to almost perfectly regurgitate the memorized solution, stumbling only enough to give a convincing theatrical impression that you're developing this algorithm on the spot instead of having memorized it (even though the original development of the algorithm probably took years of research).
I think it depends on your use case. Check whether you will actually benefit from the Pro/Max. I'm willing to bet the regular M1 is fine for the majority of people.
I got an M1 Air with 16 GB around three months ago and it works great, although it's not my only laptop. I was choosing between the M1 Air with 16 GB and the 14" M1 Pro with 32 GB, and the price difference for the additional RAM, extra ports, and screen size didn't make sense to me.
I built myself a home server last year, without any particular plan, just to try it. Specs: Ryzen 7 3800X, 32 GB ECC RAM, 2 TB HDD, and 500 GB SSD. I use it to run Plex, Torrent, TrueNAS, WireGuard, Nextcloud, and my side projects.
The server runs Proxmox; some of these services are separate VMs, others run as Nomad jobs. I also have a dedicated development VM which I can access remotely through the VPN, which is pretty nice.
For my usage so far, yes: the dev VM has around 60 GB, the others around 30-40 GB, which is plenty to install everything. For Nextcloud I mount NFS volumes from TrueNAS, which has the 2 TB HDD.
I used to work on a similar project around four years ago. It was a Java Swing application that received all the data about the layout from the backend. The system was around 14 years old or so, and they had implemented most of the pieces themselves: an IoC container, custom XML markup for constructing the UI, custom Swing elements.
It worked surprisingly well. The thing that impressed me the most was the amount of documentation, which even included tutorials on how to make new UI elements, screens, etc.
I got myself a ThinkPad X1 Extreme last year with 16 GB RAM, a 256 GB NVMe SSD, and an 8th-gen i7, running Linux; so far I'm very satisfied. It can be upgraded fairly easily: I already installed an additional 1 TB SSD, and will probably upgrade the RAM down the road.
On Linux, apart from switching between the dedicated Nvidia GPU and the integrated one, everything worked out of the box for me.