The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.
Even when AI use is understandable because of limited language fluency, I’d rather read an AI translation than a fully generated article.
If you don’t care enough to write it, why should I care enough to read it?
I am really amazed at how we are okay with LLMs writing code end to end (without a human in the loop, the "dark factory" concept), but when it comes to articles, HN is suddenly against LLMs writing words. I do not see the difference between writing code and writing prose. Both have keywords, grammar, syntax, and meaningful combinations (functions or chaining in code, collocations in words). If we think that AI-generated words are not meaningful or easy to follow, the same must apply to AI-generated code, which may be harder to read or understand since it was not written by a human. Let's stop being hypocrites.
Note: My comment is not specific to the comment I'm replying to. I just wanted to express myself somewhere, and this seemed like a suitable place.
Who is the 'we' here? When did I become ok with LLMs writing code end to end or against LLMs being used to assist writers? I wasn't aware I held either of these positions.
That's the difference to me. Code is used as instructions to computers. Written human language is used to communicate thoughts, ideas, and feelings to other humans.
I disagree with the premise that "we" are all OK with AI-slop code, however. Even if it's just for consumption by machines, for at least some developers code is a creative outlet.
The purpose of writing is to get your thoughts across in words. A prompt detailed enough to produce an article with zero chance of it adding things you don't mean would have to contain as much information as the article itself. Just write the article.
Since I cannot edit my comment, I am replying to it instead. I did not mean to insult the HN moderators; I am actually very happy that they protect HN by removing and flagging AI content. I only wanted to draw attention to the fact that AI is promoted in some areas and demoted in others, and I do not get it.
What I mean by "we" is that there is a general perception that using AI is both acceptable and effectively mandatory. This idea is becoming more and more prevalent among management, and it disturbs me deeply.
I got some replies since I commented, but I am still of the same mind. I have not seen a strong refutation of my idea. Why are some people (I didn't want to use the word "we" again) okay with AI use in code but not in prose? I know the two are not exactly the same, but they share some similarities. If we are unhappy with sloppy prose, why are we happy with sloppy, potentially buggy, or hard-to-maintain code?
We are not okay with slop code. There was healthy and widespread dissent in 2024 and the beginning of 2025. Ycombinator cracked down on that dissent, first by installing another moderator and then by downranking and banning anti-AI people.
What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.
What hypocrisy is there in distinguishing between the qualitative value of prose vs code? They serve entirely different purposes; your failure to recognize that is no one else's fault.
It is full of these short sentences that AI writing loves, presumably to feel "punchy". Normally you would copy-edit that stuff, join the sentences up, give the writing some rhythm. I agree with GP: the article is hard to read because it seems to have a lot of https://tropes.fyi/
Try for too much impact and you end up browbeating the reader until they're little more than metaphorical pulp. A human writer might like using those types of sentences, or any of the obvious LLM writing tropes, in specific contexts, but they'll usually recognize the need to avoid overusing them.
LLMs don't, and so the tropes get repeated ad nauseam. It doesn't help that social media posts make up a huge share of their training data, and there's a large body of research on how Twitter and social media in general have shifted grammar and sentence construction toward patterns more common in oral traditions, as users sought ways to make their voices heard.
It's easy to imagine a more polished version of a line like "It's not X. It's Y!" being tossed out during a speech precisely because it can be dramatic and punchy. When it's done in every other paragraph, however, it can become rather disconcerting.
Twitter is full of strung together short punchy sentences, and it spread to articles long before AI.
Not discounting the possibility that it's AI, but it didn't have the same repetition, contradiction, and inaccuracies I notice in other AI content. Though even that isn't exclusive to AI.
The advantage of Bored Ape NFTs is that one could very quickly visually identify one and block or scroll past without much thought. Reading AI slop, on the other hand, takes a few extra cycles to parse and categorise appropriately, with the occasional false evaluation and second guess. It will be a fine day, indeed, when this pattern of writing fucks off forever.
Is it really so obvious? It didn’t seem AI-written to me.
Every day I seem to encounter (and skip over in disgust) a dozen or so AI-generated articles at the top of web searches, but this wasn’t anything at all like those.
I wouldn’t go that far. It was pretty clear a long time ago that humans spending so much time filling the internet with content was going to eventually enable neural networks to pretend to communicate.
The advancements required to arrive at modern LLMs and the tech needed to get humans safely to Mars, or to live safely on the Moon, are orders of magnitude apart.
When I was first starting out as a professional developer 25 years ago doing web development, I had a friend who had retired from NASA and had worked on Apollo.
I asked him, “How did you deal with bugs?” He chuckled and said, “We didn’t have them.”
The average modern AI-prompting, React-using web developer could not fathom making software that killed people if it failed. We’ve normalized things not working well.
There's a different level of "good enough" in each industry, and that's normal. When the worst a bad site can do is reduce revenue (or lose a free user), you have less motivation to get it right than when a living human has to come back in one piece.
Yes, of course, but a culture of “good enough” can go too far. You may work in a lower-risk context, but you can still learn a lot from robust architectural thinking: edge cases, security, and more.
Low quality for a shopping cart feels fine until someone steals all the credit card numbers.
Likewise, unneeded perfectionism can slow teams to a halt for no reason. In most cases the balance is in the middle, and it should shift toward 100% correctness as the consequences get more dire.
This is not to say your code should be a buggy mess, but being 98% bug-free when you're a SaaS product pushing features is certainly better than being 100% bug-free and losing ground to competitors.
True, though I'd say it's more about bug impact than bug-freeness. If that 2% of bugs lands in the most critical area of your app and causes users to abandon your product, then you're losing ground.
That's one thing I think is good to learn from mission critical architecture: an awareness of the impact and risk tolerance of code and bugs, which means an awareness of how the software will be used and in what context by users.
As someone who uses deepseek, glm, and kimi models exclusively, an LLM telling me what to do is just off the wall.
glm and kimi in particular can't stop writing... seriously eager to please, always finishing with a fireworks emoji and saying how pleased they are that the test is working.
I have to tell them to write less documentation and simplify their code.
LLMs are next token predictors. Outputting tokens is what they do, and the natural steady-state for them is an infinite loop of endlessly generated tokens.
You need to train them on a special "stop token" to get them to act more human. (Whether explicitly in post-training or with system prompt hacks.)
This isn't a general solution to the problem and likely there will never be one.
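As a rough sketch of why the stop token is load-bearing, consider a greedy decoding loop; model.nextTokenLogits, EOS_TOKEN_ID, and MAX_TOKENS are hypothetical stand-ins, not any real library's API:

    // Without the EOS check (and the safety cap), this loop never terminates:
    // the model will happily emit tokens forever.
    const EOS_TOKEN_ID = 2;  // assumption: the id the tokenizer reserves for end-of-sequence
    const MAX_TOKENS = 512;  // safety cap on top of the learned stop behavior

    interface Model {
      nextTokenLogits(ids: number[]): number[];
    }

    function generate(model: Model, prompt: number[]): number[] {
      const ids = [...prompt];
      for (let i = 0; i < MAX_TOKENS; i++) {
        const logits = model.nextTokenLogits(ids);
        const next = logits.indexOf(Math.max(...logits)); // greedy: take the argmax
        if (next === EOS_TOKEN_ID) break; // the trained stop token is the only natural exit
        ids.push(next);
      }
      return ids;
    }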
1986. Shows how little has truly changed. I think this quote from that paper is one of the most apt for the LLM age: "The hardest single part of building a software system is deciding precisely what to build."
In the mid-2010s I was working with my biz dev guy on a product concept in the low-code/no-code space with a heavy dose of pre-transformer AI, and that paper was a north star, as it is for any serious thinking about low-code/no-code.
You can find approaches that improve things, but there's always going to be a chance that your code is terrible if you let an LLM generate it and don't review it with human eyes.
But review fatigue and the resulting apathy are real. Devs should instead be told whether incorrect code for the feature or process they are working on would be high-risk to the business. Lower-risk processes can be LLM-reviewed and merged; higher-risk ones must be human-reviewed.
If the business you're supporting can't tolerate much incorrectness (at least until it's discovered), then guess what: you aren't going to get much speed increase from LLMs. I've written about and given conference talks on this over the past year. Teams can improve this problem at the requirements level: https://tonyalicea.dev/blog/entropy-tolerance-ai/
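To make the routing idea concrete, here is a hypothetical sketch in TypeScript; the names and the two-level risk model are invented for illustration, not taken from any real tooling:

    // Hypothetical sketch: label each change with a business-risk level
    // (ideally derived from the requirements) and gate the review path on it.
    type Risk = 'low' | 'high';

    interface Change {
      id: string;
      risk: Risk;
    }

    // Low-risk changes can be LLM-reviewed and merged; anything high-risk
    // must get human eyes before merge, which is where review energy belongs.
    function reviewPath(change: Change): 'llm-review' | 'human-review' {
      return change.risk === 'high' ? 'human-review' : 'llm-review';
    }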
React compiler adds useMemo everywhere, even to your returned templates. It makes useMemo the most common hook in your codebase, and thus still very necessary; it's just not as necessary to write manually.
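For a sense of what that automates, here is a hedged sketch; ExpensiveList, items, and query are hypothetical names, and the point is only that the compiler inserts equivalent caching so you no longer write it yourself:

    // Hand-written memoization of a derived value, as you would do today
    // without the compiler. All names here are invented for illustration.
    import { useMemo } from 'react';

    function ExpensiveList({ items, query }: { items: string[]; query: string }) {
      // The React Compiler generates equivalent caching automatically,
      // so this useMemo call becomes something you read, not something you write.
      const visible = useMemo(
        () => items.filter((item) => item.includes(query)),
        [items, query],
      );
      return (
        <ul>
          {visible.map((item) => (
            <li key={item}>{item}</li>
          ))}
        </ul>
      );
    }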