Thanks for keeping that service up and running! I think it’s been about 10 years since I found it and it’s been my go-to reference ever since. I’ll often get laughs from people when I share it as a reference.
I’ve been building my wife a budget tracking dashboard for reporting on PPC ad campaigns.
At any given time, she’s working with any number of clients (directly or subcontracted, solo or as part of a team) who each have multiple, simultaneous marketing campaigns across any number of channels (google/meta/yelp/etc), each of which is running with different parameters. She spends a good amount of time simply aggregating data in spreadsheets for herself and for her clients.
Surprisingly we haven’t been able to find an existing service that fits her needs, so here I am.
It’s been fun for me to branch out a bit with my technology selections, focusing more on learning new things I want to learn over what would otherwise be the most practical (within reason) or familiar.
In addition to not enjoying it, I also don’t learn anything, and I think that makes it difficult to sustain anything in the middle of the spectrum between “I won’t even look at the code; vibes only” and advanced autocomplete.
My experience has been that it’s difficult to mostly vibe with an agent, but still be an active participant in the codebase. That feels especially true when I’m using tools, frameworks, etc. that I’m not already familiar with. The vibing part of the process doesn’t give me any deeper understanding or experience with which to guide or troubleshoot. Same goes for maintaining existing skills.
It's like trying to learn math by reading vs by doing. If all you're doing is reading, it robs you of the depth of understanding you'd gain by solving things yourself. Going down wrong paths, backtracking, finally having that aha moment where things click, is the only way to truly understand something.
Now, for all the executives who are trying to force-feed their engineering team to use AI for everything, this is the result. Your engineering staff becomes equivalent to a mathematician who has never actually done a math problem, just read a bunch of books and trusted what was there. Or a math tutor for your kid who "teaches" by doing your kid's homework for them. When things break and the shit hits the fan, is that the engineering department you want to have?
I'm fairly certain that I lost a job opportunity because the manager interviewing me kept asking me variations of how I use AI when I code.
Unless I'm stuck while experimenting with a new language or finding something in a library's documentation, I don't use AI at all. I just don't feel the need for it in my primary skill set because I've been doing it so long that it would take me longer to get AI to an acceptable answer than doing it myself.
The idea seemed rather offensive to him, and I'm quite glad I didn't go to work there, or anywhere that using AI is an expectation rather than an option.
I definitely don't see a team that relies on it heavily having fun in the long run. Everyone has time for new features, but nobody wants to dedicate time to rewriting old ones that are an unholy mess of bad assumptions and poorly understood code.
My company recently issued a "Use AI in your workflow or else" mandate, and it has absolutely destroyed my motivation to work.
Even though there are still private whispers of "just keep doing what you're doing; no one is going to be fired for not using AI", just the existence of the top-down mandate has made me want to give up and leave.
My fear is that this is every company right now, and I'm basically no longer a fit for this industry at all
Edit: I'm a long way from retirement unfortunately so I'm really stuck. Not sure what my path forward is. Seems like a waste to turn away from my career that I have years of experience doing, but I struggle like crazy to use AI tools. I can't get into any kind of flow with them. I'm constantly frustrated by how aggressively they try to jump in front of my thought process. I feel like my job changed from "builder" to "reviewer" overnight and reviewing is one of the least enjoyable parts of the job for me
I remember an anecdote about Ian McKellen crying on a green screen set while filming The Hobbit, because talking to a tennis ball on a stick wasn't what he loved about acting.
I just don't understand your company and the company OP interviewed for. This is like mandating everyone use syntax highlighting or autocomplete, or sit in a special type of chair or use a standing desk, and making their use a condition for being hired. Why are companies so insistent that their developers "use AI somehow" in their workflows?
I think it's way more basic. Much like recruiters calling me up and asking about 'kubernetes', they are just trying to get a handle on something they don't really understand. And right now all signs point to 'AI' as the handle that people should pull on to get traction in software.
It is incredibly saddening to me that people do pattern matching and memorize vocabulary instead of trying to understand things even at a basic level so they can reason about it. But a big part of growing up was realizing that most people don't really understand or care to understand things.
The other side of me thinks that maybe the eventual landing point of all this is a merger of engineering and PM. A sizeable chunk of engineering work isn't really anything new. CRUD, jobs, events, caching, synchronization, optimizing for latency, cost, staleness, redundancy. Sometimes it amazes me that we're still building so many ad-hoc ways of doing the same things.
Like, say there's a catalog of the 1000 most common enterprise (or embedded, or UI, or whatever) design patterns, and AI is good at taking your existing system and your new requirements, identifying the couple of design patterns that best fit, giving you a chart with the various tradeoffs, and, once you select one, adding that pattern to your existing system with the details that match your requirements.
Maybe that'd be cool? The system/AI would then be able to represent the full codebase as an integration of various patterns, and an engineer, or even a technical PM, could understand it without needing to dive into the codebase itself. And hopefully since everything is managed by a single AI, the patterns are fairly consistent across the entire system, and not an amalgamation of hundreds of different individuals' different opinions and ideals.
Another nice thing would be that huge migrations could be done mostly atomically. Currently, something like adding enterprise-wide support for dynamic authorization policies takes years, because every team has to update their service's code to handle the new authz policy in their domain, and so the authz team has to support the old way and the new way, and a way to sync between them, roughly forever. With AI, maybe all this could just be done in a single shot, or over the course of a week, with automated deployments, backfill, testing, and cleanup of the old system. And so the authz team doesn't have to deal with all the "bugging other teams" or anything else, and the other teams also don't have to deal with getting bugged or trying to fit the migration into their schedules. To them it's an opaque thing that just happened, no different from a library version update.
With that, there's fewer things in flight at any one time, so it allows engineers and PMs to focus on their one deliverable without worrying how it's affecting everyone else's schedules etc. Greater speed begets greater serializability begets better architecture begets greater speed.
So, IDK, maybe the end game of AI will make the job more interesting rather than less. We'll see.
The one place it really shines for me personally is bash scripts.
I've probably written 50 over the last two years for relatively routine stuff that I'd either not have done (wasn't that important) or would have done via other means (schlepping through AWS CLI docs comes to mind) in 2x the time. I get little things done that I'd otherwise have put off. Same goes for IaC stuff for cloud resources. If I never have to write Terraform or CloudFormation again, I'd be fine with that.
Autocomplete is hit or miss for me: VS Code is pretty good with Copilot; JetBrains IDEs are absolutely laughably bad with Copilot (typically making obvious syntax errors on any completion of a function signature, constructor, etc.), to the point that I disabled it.
I've no interest in any "agent" thingys for the time being. Just doesn't interest me, even if it's "far better than everyone" or whatever.
I will always have a soft spot in my heart for Django, the python web framework, even though I don’t use it anymore. https://www.djangoproject.com/
When I was still learning to code, I spent hours and hours and hours poking around the Django source code. In particular I was fascinated by the metaprogramming used in the Model and Query objects, and I ended up learning a ton about how Python works under the hood.
- function name doesn't match pep8
- name doesn't match its behavior
- docstring is trying to explain what it does instead of a proper function name doing so, see point 2
- a 60-line for loop
- what does d mean in for d in object_list? Perhaps d as an object/instance/item? Good luck remembering that when you reach the end of that loop 60 lines later
- using comments instead of functions
- the "Handle M2M relations" comment could be replaced with handle_m2m_relations(...)
- the "Handle FK fields" comment could be replaced with handle_fk_fields(...)
- and so on ..
- using/catching generic exceptions
- using isinstance instead of proper polymorphism
- **options
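The "comments instead of functions" point can be sketched with a toy example (illustrative names only, not Django's actual code): a long loop whose phases are delimited by comments becomes a loop that reads as a table of contents once each phase is a named helper.

```python
# Hypothetical refactor sketch: the field names and dict shapes here are
# made up for illustration and do not reflect Django internals.

def handle_m2m_relations(obj):
    # Stand-in for a comment-delimited "Handle M2M relations" block.
    return list(obj.get("m2m", []))

def handle_fk_fields(obj):
    # Stand-in for a comment-delimited "Handle FK fields" block.
    return list(obj.get("fk", []))

def save_objects(object_list):
    saved = []
    for obj in object_list:  # 'obj' instead of 'd': the name documents itself
        saved.append({
            "name": obj["name"],
            "m2m": handle_m2m_relations(obj),
            "fk": handle_fk_fields(obj),
        })
    return saved
```

Each helper now has a name that states its intent, and the loop body stays short enough that the iteration variable is always in view.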
And I've seen way worse things inside Django than this. Please don't recommend Django. Please.
https://github.com/django/django Django's codebase is a pleasure to read. The directory structure is intuitive to navigate, and test coverage is great.
Me but with Flask and its cohorts like Werkzeug. I always found the number of security advisories for Django, compared to those, somewhat alarming. I might still use it once in a while for something like prototyping.
Python is easy to write but writing it "right", in a way that doesn't compromise performance, is a thing.
> Me but with Flask and its cohorts like Werkzeug. I always found the amount of security advisories with Django vs that to be something alarming.
At work, another team introduced automated CVE scanning to fulfill a contractual obligation to do so. When they asked me to implement this on my team's Django project, I said "well alright, as long as it doesn't constantly break the build because of some obscure false positive CVE".
Within a week, the CI job was broken because of 5 "CVE"s. 4 were false positives for our project and 1 was a configuration error by the other team.
Just to let you know to take "number of CVEs" with a large grain of salt.
You get CVEs inside Django because it’s a large and widely used target, and because in Flask, FastAPI, etc. you end up implementing a lot more stuff yourself instead of using built-in things. That doesn’t necessarily mean your code is more secure!
It ups your chances. I try and choose small, well-written libraries to work with in everything I do. Too many batteries included and they start to leak eventually.
Somewhat unrelated to the point you’re making, something seems off about the results in that Glassdoor link.
Searching “computer science” in the jobs tab treats that as a keyword and shows results for a variety of job titles related to computer science. The salaries tab, however, seems to indicate it’s only collecting information about job titles that have “computer science” in the title.
> The work has received funding from the National Geographic Society and the U.S. National Institutes of Health, as better understanding adaptations to high altitude life is “potentially relevant in treating a number of human diseases that relate to... problems with oxygen delivery and oxygen utilization,” he says.
I often wonder how discoveries like this ultimately manifest into actionable technology. Are they studying cell structure? DNA? What does that process look like?
> The results could also aid doctors in treating altitude sickness and coping with life at high altitude or elsewhere where there are low levels of oxygen.
That makes me wonder if the end goal is some kind of pharmaceutical drug.
I think the reality is that researchers justify themselves to grant-making organizations in whatever way they can but are often just doing science for its own sake.
Anecdotally, I watched a talk about how the structure of the ribosome was solved, and the researcher mentioned that they justified themselves to grant-making organizations by saying it would help develop antibiotics - which did turn out to be the case, but they described themselves as feeling amused rather than vindicated.
Bingo. While understandable, I fear the need to justify nearly all research by potential foreseeable gain limits our ability to invest in truly fundamental science.
Yes, grant-style funding is really bad for basic research. That's where larger institutes have the advantage: a few prestigious findings can bring the funding to do a lot of less glamorous stuff on the side.
Or trying to advance their careers, which isn't the same thing, as it's self-interest without regard for whether science is being meaningfully advanced.
It’s almost impossible to predict how any knowledge will be used in the future. What is certain is that if you look at almost anything that exists in modern society, its existence depends on thousands of knowledge morsels, few of which were created with such eventual application in mind. That’s why successful civilizations must invest in knowledge regardless of its apparent practical relevance.
I agree, but I try to stick to downgrading the comment/argument instead of the people themselves, because I’ve found it to be a pretty ubiquitous line of thought since I started noticing it.
I was listening to an episode of Joe Rogan, and the guest considered himself a conservative.
Discussing the differences between a conservative and a liberal he used an analogy of a fence, where a liberal would want to take it down and a conservative would want to leave it up.
> I would argue most relationships with coworkers aren't genuine. You might have a different metric for this, but mine is that when you change jobs, these relationships evaporate.
True, my relationship with my coworkers is scoped to my work environment, but I don't think that makes them not genuine. Rather, they are circumstantial, limited, and probably have a time limit on them. I don't value them any less for what they are. In fact, I embrace them for what they are: a meaningful aspect of the large chunk of my life that I spend working, even if they never exist outside of that realm.
Some of them do, like you say, slip through the cracks and do become friends outside of work.
As an analog, I spent about 5 years traveling and working remotely out of a backpack. I met A LOT of people, and I had many genuine and sometimes perspective-changing interactions with people. Most of them I don't keep in touch with anymore, but that doesn't devalue the time that I spent with them, even if we knew from the moment we met that we would only ever interact face-to-face for a few months.
Transactional and/or circumstantial and/or scoped to some time-bound aspect of your life != not genuine
"Their" is defined as a "his or her" singular pronoun in most modern dictionaries, perhaps in part "to be PC" but probably much more because "his or her" is an incredibly clunky phrase to say.
Consider:
They asked their teacher if their homework was due on Friday.
vs.
He or she asked his or her teacher if his or her homework was due on Friday.
1-2% includes people who have extremely minor deviations from the norm and requires you to use the most strict biological definitions of 'male' or 'female' that we could possibly use. The vast majority of these 1-2% are still easily assigned a gender.
From your wikipedia article-
> If you ask experts at medical centers how often a child is born so noticeably atypical in terms of genitalia that a specialist in sex differentiation is called in, the number comes out to about 1 in 1500 to 1 in 2000 births [0.07–0.05%]. But a lot more people than that are born with subtler forms of sex anatomy variations, some of which won't show up until later in life.[132]
There are some percentage of people who aren't overwhelmingly male or female, and it's important to consider them with our speech, but I don't think 1-2% is a fair representation.
How courteous of you to assign gender definitions to people based on their chromosomes! Binary gender theory is the tail wagging the dog; it's backwards reasoning used to prop up outdated cultural beliefs.
Those people who need their chromosomes analyzed are part of the 0.07-0.05%, not the 1-2%
I imagine we would agree about the importance of respectful social interaction with those who don't fit within gender norms. I'm probably on your side here, but it's important to have these discussions using good facts.