
Letter writing (with faster delivery), Book printing, Radio, Television, Music Distribution (records and tapes) --

This is a list of things that, when new, were going to be the downfall of society. I'm sure a few of you are old enough to remember the satanic panic of the 80s and the PMRC in the 90s.

None of these turned out the way people thought. There is nothing new under the sun, and this response looks very much like hyperbole in the face of manipulated data and "feelings" over "facts".

That isn't to say that there's nothing wrong with Facebook or social media, but this keeps getting attention when it is nowhere near the top of the list.


The house is poorly put together because the carpenter used a cheap nail gun and a crappy saw.

LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect

If you're a skilled dev in a "common" domain, an LLM can be an amazing tool when you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils" and "small automations" that in the past you could never justify writing.

I'm not a lawyer or a doctor. I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain, because I can work with it and take over when it goes off the rails.


It is hard to test LLM legal or medical advice without risk of harm, but it is often exceedingly easy to test LLM-generated code. The most aggravating thing to me is that people just don't. I think the best thing we can do is to encourage everyone who uses or trusts LLMs to test and verify more often.
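As a concrete (and entirely made-up) example of how cheap that verification can be: if an LLM hands back a small utility function, a handful of table-driven tests catches the obvious failure modes in a few minutes. Both the function and the tests below are hypothetical:

    import re
    import pytest

    # slugify() stands in for any small utility an LLM might hand back.
    def slugify(text: str) -> str:
        """Lowercase, collapse runs of non-alphanumerics to '-', trim dashes."""
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    # A few minutes of verification before trusting it.
    @pytest.mark.parametrize("raw,expected", [
        ("Hello, World!", "hello-world"),
        ("  spaces   everywhere  ", "spaces-everywhere"),
        ("already-a-slug", "already-a-slug"),
        ("", ""),
    ])
    def test_slugify(raw, expected):
        assert slugify(raw) == expected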

Bing bing bing.

Most of the safety people on the AI side seem to have some very hyperbolic concerns and little understanding of how the world works. They are worried about scenarios like HAL and the Terminator, while the reality is that if linemen stopped showing up to work for a week across the nation, there would be no more power. That an individual with a high-powered rifle can shut down the grid in an area with ease.

As for the other concerns they had... well, we already have those social issues, and we are good at arguing about the solutions and not making progress on them. What sort of god complex does one have to have to think that "AI" will solve any of it? The whole thing is shades of the last hype cycle, when everything was going to go on the blockchain (medical records, no thanks).


Do I feel bad for the above person?

I do. Deeply.

But having lived through the 80s and 90s and the satanic panic, I gotta say this is dangerous ground to tread. If this had been a forum user rather than an LLM who had done all the same things and not reached out, it would have been a tragedy, but the story would just have been one among many.

The only reason we're talking about this is that anything related to AI gets eyeballs right now. And our youth suicide epidemic outweighs other issues that get lots more attention and money at the moment.


> maybe the scale of spam is enough to justify it.

This is 100 percent the case, and why these things are this way.

If you wanted to make email two point oh, I don't think it would look a lot like what we have today.


> This is 100 percent the case, and why these things are this way.

But Gmail apparently accepts emails without a Message-ID on personal mailboxes.


I think a mail 2.0 would be notify-and-pull based: you notify a recipient's mail server that there's a message from <address> for them, then that server connects to the MX of record for the domain of <address> and retrieves the <message-id> message.

Would this make mass emails and spam harder? Absolutely. Would it be a huge burden for actual communication with people? Not so much. From there, actual white/blacklisting processes would work that much better.
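A minimal sketch of what that notify-and-pull exchange could look like, in Python. Everything here is hypothetical: the /notify and /messages endpoints are made up, and this is not an existing protocol, just the shape of the idea.

    import dns.resolver  # pip install dnspython
    import requests

    def lookup_mx(domain: str) -> str:
        """Return the highest-priority MX host for a domain (ordinary DNS)."""
        answers = sorted(dns.resolver.resolve(domain, "MX"),
                         key=lambda r: r.preference)
        return str(answers[0].exchange).rstrip(".")

    def notify(recipient_domain: str, sender: str, message_id: str) -> None:
        """Sender side: tell the recipient's server a message is waiting."""
        recipient_mx = lookup_mx(recipient_domain)
        requests.post(f"https://{recipient_mx}/notify",  # hypothetical endpoint
                      json={"from": sender, "message_id": message_id},
                      timeout=10)

    def pull(sender_address: str, message_id: str) -> str:
        """Recipient side: connect to the MX of record for the sender's domain
        and retrieve the announced message from there."""
        sender_mx = lookup_mx(sender_address.split("@", 1)[1])
        resp = requests.get(f"https://{sender_mx}/messages/{message_id}",  # hypothetical endpoint
                            timeout=10)
        resp.raise_for_status()
        return resp.text

The point of the shape is that a message only exists on the sender's own infrastructure until the recipient's server decides to go get it.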


Is the idea that you could decide from the envelope whether you want to even bother fetching the message? Besides that, I'm not sure I see the advantage.

You have to have a working mail server attached to a domain to be able to send mail... that's the big part. Right now, email can more or less come to anywhere from anywhere as anyone. There are extensions for signing, TLS for connections, etc., but in general SMTP at its core is pretty open, and there have been efforts to close this.

It would simply close the loop and push the burden of the messages mostly onto the sender's system.

And yes, you can decide from the envelope, and there's a higher chance of envelope validity.


Like it proves you have the ability to receive mail at the domain you're sending from? I feel like SPF/DKIM already does this.
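For reference, SPF today works by the sending domain publishing its authorized senders as a DNS TXT record, which the receiving server looks up at delivery time. A rough sketch of just that lookup (the domain is a placeholder; real validation also checks the connecting IP against the policy and verifies DKIM signatures):

    import dns.resolver  # pip install dnspython

    def spf_policy(domain: str) -> str | None:
        """Return the domain's published SPF policy (the TXT record starting
        with v=spf1), or None if it doesn't publish one."""
        for rr in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rr.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return None

    # e.g. spf_policy("example.com") might return something like
    # "v=spf1 include:_spf.example.com ~all"
    print(spf_policy("example.com"))

The distinction the parent comment seems to be drawing is that the receiver would fetch the message from the sender's own server, so the proof is the ability to actually serve mail for the domain, not just a published DNS record.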


JMAP covers the communication between a mail client and shared directory/mail services on a server. It does not include server-to-server communication (that I am aware of) for sending mail to other users/servers.
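For anyone unfamiliar with what that client-to-server piece looks like: a JMAP client POSTs a batch of method calls as JSON to the server's API endpoint. A minimal sketch of a single Email/query call follows; the endpoint URL and account ID are placeholders, and a real client discovers them from the session resource at /.well-known/jmap.

    import json
    import requests

    API_URL = "https://mail.example.com/jmap/api"   # placeholder endpoint
    ACCOUNT_ID = "u123"                             # placeholder account id

    request_body = {
        "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
        "methodCalls": [
            ["Email/query",
             {"accountId": ACCOUNT_ID,
              "filter": {"hasKeyword": "$seen"},
              "limit": 10},
             "call-0"],
        ],
    }

    resp = requests.post(API_URL, json=request_body, timeout=10)
    print(json.dumps(resp.json(), indent=2))

Note that all of this is client-to-server traffic; moving mail between servers still happens over SMTP.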

Couldn’t resist replying to:

> If you wanted to make email two point oh, I don't think it would look a lot like what we have today.


I use Gitea already... I haven't seen Forgejo before today. I'm now curious if it is worth the switch.

Forgejo was originally forked from Gitea.

The dot-com bubble. The reboot of tech after it (pre-2008), at the dawn of podcasting, Web 2.0, the "open web".

70-hour weeks weren't unheard of. Why? Because the money was stupid and you had skin in the game.

Lots of people got wealthy, very wealthy. Fuck you money wealthy.

I know a lot of people who did that and then kept working. The large majority of them in fact.

If you're here and you're looking at one of these jobs, this is the critical question you need to ask when negotiating: "Can I see a cap table?" If they say anything other than yes, then your response is "Without a cap table the value of the equity being offered is ZERO; I'm going to need a lot more cash."
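A toy illustration of why the cap table matters, with entirely made-up numbers: the same option grant is worth very different amounts depending on the fully diluted share count and the preference stack, and both of those only show up on the cap table.

    # All numbers are hypothetical, for illustration only.
    options = 50_000                      # what the offer letter says
    strike = 0.50                         # per-share strike price
    fully_diluted = 40_000_000            # only visible on the cap table
    liquidation_preferences = 60_000_000  # investors get paid back first
    exit_value = 100_000_000              # assumed acquisition price

    ownership = options / fully_diluted
    common_pool = max(exit_value - liquidation_preferences, 0)
    payout = ownership * common_pool - options * strike

    print(f"ownership: {ownership:.4%}")  # 0.1250%
    print(f"payout:    ${payout:,.0f}")   # $25,000 on a $100M exit

This ignores option-pool refreshes, participation rights, and taxes; the point is only that none of it can be priced without seeing the cap table.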


> My pet peeve with AI is that it just accelerates whatever has already been automated or can be automated easily ....

> I’m just a frustrated old man I guess.

I think this is a great summary of the failure of vision that a lot of tech people are having right now.

> automate anything that already has an endpoint or whatever

Facebook used to have APIs, Reddit used to have APIs, Amazon used to have APIs.

They are gone.

Enshittification and dark patterns have taken over.

"Hey open claw, cancel service xxx" where XXX is something that is 17 steps and purposely hard to cancel so they keep your money.

What's going to happen when your AI tool can go to a website and strip the ads off and return you just the text? What happens when it can build a customized news feed that looks less like Facebook and more like HN? Aren't we just gaining back function we lost with the death of RSS?

Consumers are mad about the hype of AI, but the moment it can cut through the bullshit we keep putting in their way, it's going to wreck business MODELS, and the choice will be adapt or die. Start asking your "AI" tools to do all the basic, tedious bullshit tasks that are low risk (you have a ton of them), and if they get 1/4 of them done you're going to free up a ton of your own time.
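A rough sketch of the "strip the ads off and return just the text" idea, using plain requests and BeautifulSoup rather than any particular agent framework (the URL and the tag list are placeholders; a real agent would likely do something smarter):

    import requests
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def just_the_text(url: str) -> str:
        """Fetch a page and return only its readable text."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        # Drop the usual clutter before extracting text.
        for tag in soup(["script", "style", "nav", "aside", "footer", "iframe"]):
            tag.decompose()
        return "\n".join(
            line.strip()
            for line in soup.get_text("\n").splitlines()
            if line.strip()
        )

    print(just_the_text("https://example.com/some-article"))  # placeholder URL

None of this is new technology; the change is that an agent can decide to do it on your behalf, for any page, without anyone shipping a purpose-built reader.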


Well, that is a vision I can get behind. Maybe it will not just ruin white collar livelihoods, but also that of the rent seekers. Silver lining.

Dead wrong.

Because the world is still filled with problems that would once have been on the wrong side of the is-it-worth-your-time matrix (https://xkcd.com/1205/).
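The arithmetic behind that matrix is simple: time saved over the horizon has to beat time spent automating, and what an LLM changes is the "time spent" side, which collapses to roughly a prompt plus a code review. A toy version with made-up numbers:

    # Is-it-worth-the-time arithmetic (all numbers are made up).
    minutes_per_run = 5
    runs_per_week = 3
    horizon_weeks = 5 * 52            # xkcd 1205 uses a five-year horizon

    time_saved = minutes_per_run * runs_per_week * horizon_weeks  # 3,900 minutes
    cost_by_hand = 8 * 60             # a full day writing it yourself
    cost_with_llm = 30                # a prompt plus reviewing the generated code

    print(time_saved, cost_by_hand, cost_with_llm)
    # Writing it by hand already pays off here, but the LLM version pays off
    # even for tasks run far less often, which moves a lot of problems onto
    # the "worth it" side of the matrix.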

There are all sorts of things that I, personally, should have automated long ago that I threw at Claude to do for me. What was the cost to me? A prompt and a code review.

Meanwhile, on larger tasks an LLM deeply integrated into my IDE has been a boon. Having an internal debate on how to solve a problem? Try both, write a test, prove out which one is going to be better. Pair program, function by function, with your LLM; treat it like a jr dev who can type faster than you if you give it clear instructions. I think you will be shocked at how quickly you can massively scale up your productivity.


Yup, I've already gotten like 6 of my personal projects running, including 1 for my wife, that I had lost interest in. For a few dollars, these are now actually running and being used by my family. These tools are a great enabler for people like me. lol

I used to complain when my friends and family gave me ideas for something they wanted or needed help with because I was just too tired to do it after a day's work. Now I can sit next to them and we can pair program an entire idea in an evening.


If it is 20% slower for you to write with AI, but you are not stressed out and you enjoy it, so you actually code, then the AI is a win and you are more productive with it.

I think that's what is missing from the conversation. It doesn't make developers faster or better, but it can automate what some devs detest and feel burned out having to write, and for those devs it is a big win.

If you can productively code 40 hours a week with AI and only 30 hours a week without AI then the AI doesn't have to be as good, just close to as good.
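Worked out with the parent's own numbers (purely illustrative): even if each AI-assisted hour is 20% less productive, the extra hours you're actually willing to put in more than make up for it.

    # Purely illustrative numbers from the comment above.
    hours_without_ai, rate_without_ai = 30, 1.0   # hours you actually code, full effectiveness
    hours_with_ai, rate_with_ai = 40, 0.8         # more hours, each 20% less effective

    print(hours_without_ai * rate_without_ai)     # 30.0 units of output
    print(hours_with_ai * rate_with_ai)           # 32.0 units of output -- still a net win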


I'm in agreement with you 100%. A lot of my job is coming into projects that have been running already and having to understand how the code was written, the patterns, and everything else. Generating a project with an LLM feels like doing the same thing. It's not going to be a perfect codebase, but it's enough.

Last night I was trying to find a correlation between some malicious users we had found and information we could glean from our internet traffic, and I was able to crunch a ton of data automatically without having to do it myself. I had a hunch, but it made the hunch verifiable, and then I was able to use the queries it had used to verify it myself. Saved me probably 4 or 5 hours, and I was able to wash the dishes.


The matrix framing is a very nice way to put it. This morning I asked my assistant to code up a nice debugger for a particular flow in my application. It’s much better than I would have had the time/patience to build myself for a nice-to-have.

I sort of have a different view of that time matrix. If AI is only able to help me with low-value tasks, ones where I previously wouldn’t have bothered, is it really saving me anything? Where before I’d simply ignore auxiliary tasks and focus on what matters, I’m now constantly detoured by them, thinking “it’ll only take ten minutes.”

I also primarily write Elixir, and I have found most agents are only capable of writing small pieces well. More complicated asks tend to produce unnecessarily complicated solutions, ones that may “work” on the surface but don’t hold up in practice. I’ve seen a large increase in small bugs with more AI coding assistance.

When I write code, I want to write it and forget about it. As a result, I’ve written a LOT of code which has gone on to work for years without touching it. The amount of time I spent writing it is inconsequential in every sense. I personally have not found AI capable of producing code like that (yet, as all things, that could change).

Does AI help with some stuff? Sure. I always forget common patterns in Terraform because I don’t often have to use it. Writing some initial resources and asking it to “make it normal” is helpful. That does save time. Asking it to write a GenServer correctly is an act of self-harm, because it fundamentally does not understand concurrency in Erlang/BEAM/OTP. It very much looks like it does, but it 100% does not.

tl;dr: I think the ease of use of AI can cause us to overproduce, and as a result we miss the forest for the trees.


> are only capable of writing small pieces well.

It excels at this, and if you have it deeply integrated into your workflow and IDE/dev env, the loop should feel more like pair programming, like tennis, than like it's doing everything for you.

> I also primarily write Elixir,

I would also venture that it has less to do with the language (it is a factor) and more to do with what you are working on. Domain will matter in terms of sample size (code) and understanding (language to support it). There could be thousands of examples in its training data of what you want, but if no one wrote a comment that accurately describes what that does...

> I think the ease of use of AI can cause us to over produce and as a result we miss the forest for the trees.

This is spot on. I stopped thinking of it as "AI" and started thinking of it as "power tools". Useful, and like a power tool you should be cautious because there is danger there... It isn't smart, it's not doing anything that isn't in its training data, but there is a lot there, everything, and it can do some basic synthesis.


If you want to build a house you still need plans. Would you rather cut boards by hand or have a power saw? Would you rather pound nails, drill pilot holes with a bit and brace, and put in flathead screws... or would you want a nail gun and an impact driver?

And you still need plans.

Can you write a plan for a sturdy house, and verify that it meets the plan, that your nails went all the way in and in the right places?

You sure can.

Your product person, your directors, your clients might be able to do the same thing; it might look like a house, but it's a fire hazard, or in the case of most LLM-generated code, a security one.

The problem is that we moved to scrum and agile, where your requirements are pantomime and post-it notes if you're lucky, interpretive dance if you aren't. Your job is figuring out how to turn that into something... and a big part of what YOU as an engineer do is tell other people "no, that's dumb" without hurting their feelings.

IF AI coding is going to be successful, then some things need to change: requirements need to make a comeback. GOOD UI needs to make a comeback (your dark pattern around cancellation is now going to be at odds with an agent). Your hide-the-content-behind-a-login-or-a-paywall won't work anymore because, again, end users have access too... the open web is back, and by force. If a person can get in, we have code that can get in now.

There is a LOT of work that needs to get done, more than ever. Stop looking back and start looking forward, because once you get past the hate and the hype there is a ton of potential to right some of the ills of the last 20 years of tech.

