Apparently, it was collecting passwords from victim machines. So, step one would be to remove everything the script put onto your machine. Step two would be to change your passwords.
Step one is to unplug the machine from the internet. Step two is to use another machine to change all your passwords, starting with the "pivot" passwords - your password manager master password, your email accounts, your Apple ID, your mobile provider - followed by financial accounts and then all others. While changing passwords, make sure to "invalidate all sessions" where possible.
Only after you’ve done all this should you move on to step three: reformat your computer and install the OS from scratch.
Is there any way to check if you're affected? I just happened to install Homebrew while the malicious site was up and now I'm not sure if I installed the legit version.
I'd isolate the machine from the internet, change my passwords from a trusted machine, save the media and documents from the isolated machine, and then reinstall the OS / factory reset it. There are surely technical possibilities this doesn't account for, but I'd be okay with the procedure nevertheless.
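As for checking whether the copy you installed is the official one: a quick first sanity check is that Homebrew's own directory is a git clone of the official Homebrew/brew repository, so you can verify where it points and whether its checked-out commit really exists upstream. A minimal sketch in Python (assuming `git` and `brew` are on your PATH; a sufficiently thorough attacker could fake both checks, so the reinstall advice above still stands):

```python
import subprocess

OFFICIAL = "https://github.com/Homebrew/brew"

def out(*args: str) -> str:
    """Run a command and return its stripped stdout."""
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout.strip()

# Homebrew's top-level directory is a git clone of Homebrew/brew.
repo = out("brew", "--repository")

# Check 1: the clone should point at the official GitHub repository.
origin = out("git", "-C", repo, "remote", "get-url", "origin")
print("origin:", origin)
if "github.com/Homebrew/brew" not in origin:
    print("WARNING: origin is not the official Homebrew repository")

# Check 2: the checked-out commit should actually exist upstream.
subprocess.run(["git", "-C", repo, "fetch", OFFICIAL, "master"], check=True)
head = out("git", "-C", repo, "rev-parse", "HEAD")
is_ancestor = subprocess.run(
    ["git", "-C", repo, "merge-base", "--is-ancestor", head, "FETCH_HEAD"]
)
print("HEAD is a genuine upstream commit:", is_ancestor.returncode == 0)
```

Passing both checks only tells you the git checkout looks official; it says nothing about what an installer script may have done elsewhere on the system.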
I think it's pretty clear by now that the owner of X is using it as a political tool, so discussing technical details of the site seems pointless. I did like the new anti-toxicity features on Bluesky though:
> I did like the new anti-toxicity features on Bluesky though: [The Verge article about a new Bluesky feature that allows cited accounts to remove themselves as a citation in other accounts' posts retroactively.]
But this only reduces the usefulness of the quote/citation feature.
In low-trust quote scenarios, users would likely revert to sharing screenshots of the referenced post. That keeps the referenced content under the quoting account's control, but reduces authenticity, since screenshots can be faked.
Maybe I'm yelling at the clouds here, but I think all content in the post should be controlled by the account making the post - much like the original Twitter RT convention. ...or HN.
The disclaimer really should be much tougher: "Every LLM consistently makes mistakes. The mistakes will often look very plausible. NEVER TRUST ANY LLM OUTPUT."
That doesn't sound like a helpful attitude. Everything you read might be wrong, LLM or not; it's just a numbers game. With GPT-3 I'll trust the output a certain amount; it's still useful for some tasks, but not that many. With GPT-4 I'll trust the output more.
LLMs are impressively good at confidently stating false information as fact, though. They use niche terminology from a field, cite made-up sources and events, and come across to a layman as convincingly knowledgeable as an actual expert.
People are trusting LLM output more than they should. And the search engines people have historically used to find information are trying to replace results with LLM output. Most people don't know how LLMs work, or how their search engine gets the information it shows them. Many won't be able to tell the difference between the scraped web snippets Google has shown for years and a response from an LLM.
It's not even an occasional bug with LLMs; it's practically the rule. They don't know anything, so they'll never say "I don't know" or give any indication of whether something they say is trustworthy.
The top result on Google is literally just a function of how hard someone worked on their SEO. They might not "hallucinate", but a company can certainly use strong SEO skills to push whatever product or opinion best suits it.
But it’s correct. Without independent verification, you can never, ever trust anything the magic robot tells you. Of course this may not matter so much for very low-stakes applications, but the principle still holds.
Until this point in history, there was a reasonable correlation between the quality of the writing and the "quality" of the underlying work in most text. If effort was put into the writing (and into learning to write), that in itself was a decent indicator that effort and skill had also gone into the data and ideas the text conveyed.
That rule no longer works. People are still going to rely on it for a while though, and I'm worried it's going to break some stuff over the next few years.
I think the solution here is probably to stop using publication in peer-reviewed journals as a job requirement for academics. Even before ChatGPT, academic publishing was already falling victim to Goodhart's law. Remove the perverse incentives and it should cut down on this quite a lot.
If I understand correctly, generally not from Starlink. Its orbits are too low; atmospheric drag eventually pulls the satellites and their debris back into the atmosphere, where they burn up.
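For a rough sense of the timescale, here's a back-of-envelope drag calculation. Every number in it (satellite mass, drag area, drag coefficient, the crude exponential density model) is an illustrative assumption, not a Starlink spec:

```python
import math

# Back-of-envelope deorbit time from atmospheric drag on a dead satellite.
MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth radius, m

mass = 260.0         # satellite mass, kg (assumed)
area = 10.0          # drag cross-section, m^2 (assumed)
cd = 2.2             # drag coefficient (typical assumption)

def density(alt_m: float) -> float:
    """Crude exponential atmosphere, ~1e-13 kg/m^3 at 550 km, 70 km scale height."""
    return 1e-13 * math.exp(-(alt_m - 550e3) / 70e3)

alt = 550e3          # starting altitude, m
t = 0.0
while alt > 200e3:   # below ~200 km, re-entry follows within days
    a = R_EARTH + alt
    period = 2 * math.pi * math.sqrt(a**3 / MU)
    # Semi-major axis decay per orbit due to drag (circular-orbit approximation)
    da = -2 * math.pi * cd * area / mass * density(alt) * a**2
    alt += da
    t += period

print(f"~{t / 86400 / 365.25:.1f} years to decay from 550 km (under these assumptions)")
```

Under those assumptions a dead satellite at 550 km comes down in roughly five years (real figures vary a lot with the solar cycle), which is why debris from that shell doesn't accumulate the way it would at higher altitudes.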