Hacker News | scratchyone's comments

unfortunately the page can also lie to you haha. it seems people have reviewed the code by now, but running suspicious shellcode you don't fully understand is never a great idea.

I personally had AI review the code, add comments, disassemble the shellcode, etc.
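Not something from the comments above, just a toy illustration: a stdlib-only Python sketch of the kind of static first pass you might do before ever executing unknown shellcode. The pattern table and demo bytes are made up for the example; real analysis means a proper disassembler (objdump, capstone) and human review.

```python
# Toy first-pass triage of x86-64 shellcode bytes, done WITHOUT running them.
# This is not a substitute for disassembly or review; it only flags a few
# byte patterns that deserve a closer look.
SUSPICIOUS_PATTERNS = {
    b"\x0f\x05": "syscall (direct kernel call)",
    b"\xcd\x80": "int 0x80 (32-bit syscall)",
    b"/bin/sh": "embedded shell path",
}

def triage(shellcode: bytes) -> list[str]:
    """Return a human-readable note for each suspicious pattern found."""
    notes = []
    for pattern, meaning in SUSPICIOUS_PATTERNS.items():
        offset = shellcode.find(pattern)
        if offset != -1:
            notes.append(f"offset {offset:#x}: {meaning}")
    return notes

# Classic execve("/bin/sh") payload fragment as a demo input.
demo = b"\x48\x31\xf6\x56\x48\xbf/bin/sh\x00\x57\x54\x5f\x0f\x05"
for note in triage(demo):
    print(note)
```

Pattern matching like this is trivially evaded (encoders, self-modifying stubs), which is exactly why it can only ever earn a "look closer", never a "safe to run".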

that's quite smart. i was almost stupid enough to paste it into a terminal to check if it worked before deciding to wait and let others analyze it first haha

my company intentionally avoids all AI usage and yet our enterprise google workspace is still constantly sending us ai generated summaries of all of our various sensitive documents

tbf most companies don't have a potentially world-ending product. only real similar field is defense contractors who typically can't brag about unreleased ideas as they're classified.

I agree, but the experts the author cites do not. Professor Vallor believes that AI is a mirror and any existential fears of it are just reflected fears of ourselves; Professor Bender believes that AI is a con and that all the people who say it's powerful enough to be world-ending are lying. Anyone who concedes the premise that AI has a genuine potential to be world-ending is, I think, on the AI labs' side of this debate.

We are not such a bad thing to fear.

This technology interacts socially, so even if it can't jailbreak itself at a technical level (which feels like a tough guarantee to make at this point), it can simply ask someone to do a bad thing, and there's some chance they'll do it. The same way a human leader does.

The first kids who have only faint memories of a time before chatbots will be entering the military in 6-7 years. You have to assume they are acting as best friends, therapists, or even surrogate parents for a substantial number of kids right now.

We are going to need years to figure out what to do about this technology. I think some impetus to get that process started is a good idea.


I mean, it's the BBC and the article doesn't have any typical AI tells, where is this idea that it's AI written coming from???

I'm assuming it's possible that an AI could deduce the potential downstream, third-order consequences of an article like this on the BBC. Not talking about the wording.

>sees any random article headline, automatically assumes it's AI.

Pretty reasonable assumption these days unfortunately.

Just saying things with no evidence is not reasonable

Roko's basilisk is a very tedious idea.

Honestly, we should have learned that these claims from AI companies were pure fear-mongering back when GPT-2 was "too dangerous to release".

Given that his stated reason for calling GPT-2 too dangerous to release was that the world needed more time to prepare for the effects of this technology, and given that the following models were basically scaled-up versions of it and killed social media, news reporting, and other kinds of communication, I'd say he was right about the dangers.

funny how he didn't care about ethics the moment it was more profitable to release it than to talk about dangers.

That's true but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing. Automatic target detection and deployment of drones, or even how it might simply make their role at work redundant etc

To me, the more interesting divergence in discussion is on its capabilities.

AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.

Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.


100% agreed. That's part of the issue imo, these companies pretend their new models are "too dangerous" to seem like they care about the world, yet they have no qualms deploying existing models in warfare or bragging about impending mass-unemployment.

> That's true but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing. Automatic target detection and deployment of drones, or even how it might simply make their role at work redundant etc

I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment).

And the potential of giving knowledge and service workers the rust-belt factory worker treatment is exactly what makes Wall Street excited about AI and has the AI company leaders salivating about the profit they can make.

Warfare, policing, bio-engineered viruses are theoretical and far down the list.


Not to mention that "automatic target detection" was primarily enabled by the ~2016-2020 AI hype/boom around image recognition, not the 2022-current hype/boom around LLMs

It's already being used in warfare though.

> It's already being used in warfare though.

What I mean is theoretical to the common person. They don't have killbot drones hunting them down, and are unlikely to have that experience anytime soon.

But most people have jobs, most people would be hard-hit if they lost theirs, lots of people lose theirs, and our elites are just itching to make that happen.

I'm certainly most worried about AI: my employer started an ongoing silent layoff campaign about the same time they started enforcing AI usage. I don't think those are unconnected.


To be honest, I'm not sure which scares me more:

AI shaping warfare vs. AI being used to justify outrageous warfare


We sadly don't need AI to justify outrageous warfare. Just remember the US invading Iraq over WMDs: a full investigation never found any, and we invaded anyway, to the detriment of everyone except defense contractors.

Don't worry, these companies will make sure we get to experience both nightmare futures.

that’s not a war crime, that’s boundary setting, and honestly, that’s rare

would you like me to list the applicable sections of the Geneva convention?


AI has been used in defense for a while now; a modern Tomahawk cruise missile and its associated targeting systems are a good example. I think most people fear AI taking their job and only source of income.

Linear regression?

These were all already very valid concerns long before this era of "AI" or computational power.

The broader public is just now barely beginning to understand because all they have to do is ask a chatbot. AI does not enable new capabilities, but it does aggregate an idea into a rough sketch and do it quickly on-demand.

None of this really means it will play out that way. The devil is in the details. What it does mean is much more nuanced attention on the politics and money because that's where the power always was.


AI does enable new capabilities when it comes to constant mass surveillance, and automated weaponry.

No it doesn't. We already have all of that right now and have had it for decades.

The big investment into Project Stargate is all about managing risk. The government contractor and security clearance situation is out of control. As well, every human mistake is costly and time consuming to address. If you instead blame it on AI, you can skip the court proceedings and postmortems.

The other part of this is likely an attempt to surface information with summaries and shorten the chain of command. This is just a power grab and a dangerous dismissal of necessary implementation detail. It's a tantrum thrown by ignorant people at the top who are being displaced. We live in an ever more complicated world that demands more experienced leadership than we have available. AI is their hail mary pass.

LLMs are being abused as a political battering ram. They are not the technological breakthrough advertised. The AI label is borderline absurd, AGI even more so. NLP is an accessibility tool at best.


It seems like they were correct, to me.

Yes, I love how everyone uses this argument, when what they were saying was along the lines of "GPT-2 would make it too easy to generate spam, deepfakes, content to manipulate opinion..." (not the actual quote, but something like that). Turns out it was completely correct if you look at the state of the internet right now.

Obviously, they still overhype and oversell this end-of-humanity stuff, but this argument, regurgitated ad nauseam, is not THAT great of an example when you think about it.


I was going to say... I think people in general have this weird understanding of the word "dangerous". Just because something isn't movie-level dramatic and/or doesn't generate over-the-top violence doesn't automatically make it less dangerous. In a sense, the fact that it's benign on the surface and allowed to embed in our day-to-day life is exactly what makes the upcoming rug pull so painful.

And I am saying this as a person who actually likes this tech.


In all honesty, the people who want to turn off AI won't be downloading Warp in the first place. I know Warp has interesting terminal innovations because I've been familiar with it since before the AI boom, but new users can't really tell.

Homepage header is "Warp is the agentic development environment", and the only screenshot on the homepage shows what appears to be an AI IDE similar to Cursor, Antigravity, etc. Fair enough if that's your product direction, but there's nothing there that tells new users about your terminal UX improvements. Honestly, even if I were in the market for a new AI tool, there's nothing on your website that really tells me why I should pick Warp over any of the many competitors.

Fwiw I think Warp is quite cool, I just mean this as hopefully useful feedback from a new customer perspective.


I don't use Warp's AI features. It's pretty easy to avoid them.

I agree, it seems like disabling them is quite easy. I'm speaking more to the new-user perspective. From looking at their site, I would have no clue that the product even has features aside from AI, so I'd never get the chance to find out I can disable them.

Exactly, it's not "AI bad", it's more that the feature factory pumping so hard makes this overwhelming to start with

also

> A self-contained security audit prompt is available at docs/security-audit.md.

lmfao


is GLM genuinely comparable to the Claude models? haven't had a chance to test it yet.


I’m not going to pretend it’s on par, it’s clearly a step behind when it comes to thinking and planning.

However, it’s a very good “doer” - if I give it a list of well-scoped tasks, I can usually rely on it to execute them all correctly.


does google actually host anthropic models themselves?? surprised anthropic allows that, given how notoriously crazy they are about distillation or weight leaks or any hints of their models being used in the wrong way.


Yes, we host it ourselves, acting as the data processor which can be important for enterprise customers.

From a developer-experience standpoint, hosting them ourselves lets us take advantage of our unique infra and deliver the fastest time-to-first-token of any provider.


maybe, but the concern imo is that their response to GPU shortages is increased error rates. they could implement queuing or delayed responses instead. it's been long enough that they've had plenty of time to implement things like this, at least in their web UI where they have full control. instead it still just errors out with no further information.


I've been experiencing a good amount of delays (says it's taking extra time to really think, etc), and I'm using during off-peak time.


i notice that as well. most of the time when i see those it has a retry counter also and i can see it trying and failing multiple requests haha. almost never succeeds in producing a response when i see those though, eventually just errors out completely.


Coding is a solved problem. Claude writes the code. I edit it. I code around it.

Engineer roles dead in 6 months.


> I edit it. I code around it.

You're never gonna guess what software engineers do.


Because of the context I would think this is sarcasm, but I am not sure.


It is.

