Hacker News | beginning_end's comments

Any advice on what to do if you might be a victim of this?


Apparently, it was collecting passwords from victim machines. So, step one would be to remove everything the script put onto your machine. Step two would be to change your passwords.


Step one is to unplug the machine from the internet. Step two is to use another machine to change all your passwords, starting with the “pivot” passwords - your password manager master password, your email accounts, your AppleID, your mobile provider - followed by financial accounts and then all others. While changing passwords, make sure to “invalidate all sessions” where possible.

Only after you’ve done all this should you move onto Step 3: reformat your computer and install the OS from scratch.


Step 3 should probably be reinstalling the OS, and restoring data from backup (ideally from before the malicious version was installed).


And install ublock origin going forward.


Is there any way to check if you're affected? I just happened to install Homebrew while the malicious site was up and now I'm not sure if I installed the legit version.


Check if /tmp/update exists. If it does, you’re infected.


I think the malware tries to delete this file, so checking for it isn't a reliable way to tell whether you were infected.
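A minimal sketch of the indicator check discussed above. It assumes `/tmp/update` is the indicator path reported in this thread; as noted, the payload may remove the file, so a negative result proves nothing.

```python
import os

# Indicator path reported in the thread. The malware reportedly tries to
# delete this file, so its absence does NOT mean the machine is clean.
INDICATOR = "/tmp/update"

def possibly_infected(path: str = INDICATOR) -> bool:
    """Return True if the known indicator file is present."""
    return os.path.exists(path)

if __name__ == "__main__":
    if possibly_infected():
        print("Indicator found: assume the machine is compromised.")
    else:
        print("Indicator not found: inconclusive, the file may have been deleted.")
```

Treat this as a quick first check only; a positive hit means you're infected, but a clean result doesn't rule it out.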


I'd isolate the machine from the internet, change my passwords from a trusted machine, save the media and documents from the isolated machine, and then reinstall the OS / factory reset it. I'm sure there are technical possibilities this doesn't account for, but I'd be comfortable with the procedure nevertheless.


Bluesky has become really good over the last few months, there's really no reason to support X anymore.


You can't think of any differentiation, pros/cons, that may warrant supporting multiple different approaches?


I think it's pretty clear by now that the owner of X is using it as a political tool, so discussing technical details of the site seems pointless. I did like the new anti-toxicity features on Bluesky though:

https://www.theverge.com/2024/8/29/24231414/bluesky-anti-tox...


> I did like the new anti-toxicity features on Bluesky though: [The Verge article about a new Bluesky feature that allows cited accounts to remove themselves as a citation in other accounts' posts retroactively.]

But this feature only reduces the usefulness of the quote/citation feature.

In low trust quote scenarios, users would likely revert to sharing screenshots of the referenced post. This maintains control over the referenced content, but reduces authenticity since screenshots can be faked.

Maybe I'm yelling at the clouds here, but I think all content in the post should be controlled by the account making the post - much like the original Twitter RT convention. ...or HN.


Do you have any differentiation or pros/cons that may warrant supporting multiple different approaches that you care to share?


It's been really fun seeing the growth over the last year. Tons of people and fun/interesting posts. Feels like the early days of Twitter.


Is "decripto.org" really a reliable source?



This article reads like Elon became a victim of a filter bubble on his own platform without realizing it. Nobody really sees their own filter bubble.


There's so much cool stuff these days, like interstellar iron: https://physicsworld.com/a/antarctic-snow-yields-interstella...


The disclaimer really should be much tougher: "Every LLM consistently makes mistakes. The mistakes will often look very plausible. NEVER TRUST ANY LLM OUTPUT."


> NEVER TRUST ANY LLM OUTPUT

that doesn't sound like a helpful attitude. everything you read might be wrong, llm or not - it's just a numbers game. with gpt3 i'll trust the output a certain amount; it's still useful for some tasks, but not that many. with gpt4 i'll trust the output more


LLMs are impressively good at confidently stating false information as fact, though. They use niche terminology from a field, cite made-up sources and events, and come across to a layman as convincingly knowledgeable on a subject as an actual expert.

People are trusting LLM output more than they should be. And search engines that people have historically used to find information are trying to replace results with LLM output. Most people don't know how LLMs work, or how their search engine is getting the information it's telling them. Many people won't be able to tell the difference between the scraped web snippets Google has shown for years versus a response from an LLM.

It's not even an occasional bug with LLMs; it's practically the rule. They don't know anything, so they'll never say "I don't know" or give any indication of whether something they say is trustworthy.


at least the llm (for now) doesn't have an agenda

the top result on google is literally just the result of how hard someone worked on their seo. they might not "hallucinate", but a company can certainly use strong seo skills to push whatever product/opinion best suits them.


But it’s correct. Without independent verification, you can never, ever trust anything that the magic robot tells you. Of course this may not matter so much for very low-stakes applications, but it is still the case.


Until this point in history there has been a reasonable correlation between the quality of writing and the "quality" of the underlying work in most text. If effort was put into the writing (and learning to write), that in itself used to be a decent indicator that effort and skill also was put into the data/ideas that the text conveyed.

That rule no longer works. People are still going to rely on it for a while though, and I'm worried it's going to break some stuff over the next few years.


I think probably the solution here is to stop using publishing in peer reviewed journals as a job requirement for academics. Even before ChatGPT, academic publishing was already falling victim to Goodhart's law. Remove the perverse incentives and it should cut down on this by quite a lot.


With any luck the endpoint will be a world where texts are evaluated more on their semantics than their syntax.


Hoping for Kessler syndrome :)


If I understand correctly, generally not from Starlink. Its orbit is too low; the satellites and their debris tend to end up pushed into the atmosphere eventually.


a guy can dream


I've been thinking the same thing. It'll be interesting to see if we end up with prompt-injecting ads

