
Sure, AI tools can do this. However, VS Code is the platform. Why aren't more people worried about running arbitrary VS Code extensions that can do the same thing, AI or not?




As a VSCode extension author, I am always terrified by the amount of power I have.

It is a shame that the team never prioritized the extension permission issue [0] despite their big boss saying security is the top priority [1]. All they have is "workspace trust" and various other marginally useful security measures.

I don't install a VSCode extension unless it is either official, or well known and audited and I actually have to use it. I keep most of them disabled by default unless I need something for a project. (Even if you don't care about security, it's good for VSCode performance. I'll save that story for another day.)

[0] https://github.com/microsoft/vscode/issues/52116

[1] https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...


When some minor extension that I have installed in VSCode updates (parens colorizing and the like), I think about what could happen if the author sells it to some bad actor (or decides to push some weird code in an update).

So I started uninstalling some icon themes and less used extensions that I installed on a whim years ago.

I implicitly trust extensions by Google, Microsoft and the like, but the less known publishers make me nervous.


It doesn't even have to be malicious. I used a certain syntax highlighting theme for years, when out of nowhere the author pushed an update that rearranged all the colors. It was extremely disorienting. I forked the extension and reverted the change, so I know that one at least won't change out from under me anymore.

This is the thing I hate the most about "automatic updates" in general. I've disabled them and gone back to updating manually because the constant unexpected and unwanted UI changes finally broke a part of my soul. Unfortunately that is something that can't be done on the web, where major UI changes can be rolled out right in the middle of a session on you.

> As a VSCode extension author, I am always terrified by the amount of power I have.

Meanwhile random FOSS projects be like "please sudo curl bash to install the prebuilt binaries".


Most don't even use functions when writing those scripts, and a partially downloaded script can straight up fuck your system by accident. It's very unlikely, but it can happen, and a malicious actor can trigger it on purpose.
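The function guard alluded to here is cheap insurance: if the whole script body lives inside a function that is only called on the last line, a connection that drops mid-download leaves you with a half-defined function and a parse error instead of half-executed commands. A minimal sketch (the install steps are placeholders):

```shell
#!/bin/sh
set -eu
# Nothing runs while the function body is being defined, so a
# truncated download either fails to parse or defines an incomplete
# function -- it never executes half of an "rm -rf ..." line.
main() {
    echo "installing..."
    # real install steps would go here
    echo "done"
}
# Only this last line triggers execution; if the stream was cut off
# before it arrived, nothing happened at all.
main "$@"
```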

But this is true about lots of code. We have this notion of "it works, therefore there's no problem" which is just bad engineering. Just because you don't know there's a problem doesn't mean there isn't. Just because it passes the tests doesn't mean you have test coverage.


How did that even get started? It’s not like downloading a zip or tar file is so terribly taxing.

Convenience, mostly.

  curl -L "foo.sh" -o foo.sh && bash foo.sh
Is just more characters to type. But you should do it anyway, because a poorly written bash script can accidentally mess you up if it starts executing while only partially streamed.

Why sudo though?

I honestly think it's stupidity. Most people really don't know you can install programs for your own user and don't need system privileges. I think everyone is just so used to installing from package managers and doing `sudo make install` that they forgot programs only need to be on $PATH, not in /usr/bin.
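For the common case of a single prebuilt binary, a per-user install needs no sudo at all. A minimal sketch, where `foo` stands in for whatever binary you downloaded (created here as a dummy script so the example is self-contained):

```shell
# Per-user install: no sudo, nothing touches /usr.
mkdir -p "$HOME/.local/bin"

# Stand-in for a downloaded binary; in real life this is the file
# you fetched (and ideally checksummed).
printf '#!/bin/sh\necho "hello from foo"\n' > foo
chmod +x foo

install -m 755 foo "$HOME/.local/bin/foo"

# All that matters is that the directory is on PATH; many distros
# already add ~/.local/bin to it in ~/.profile.
export PATH="$HOME/.local/bin:$PATH"
foo   # prints: hello from foo
```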


How is that any safer?

First off, I never used sudo...

Second off, you're not streaming into bash.

Third, you gotta read between the lines a little. I allowed myself some convenience, considering my audience is programmers. Don't use &&, or shove `less foo.sh` in the middle. There are a million options here.


Don't take the example overly literally. Saving to file means you can read it before executing it.

That aside, it protects you from this gaping hole of an exploit mechanism. https://news.ycombinator.com/item?id=17636792


This is one of my pet peeves! No one should normalize the idea of piping curl output to bash.

I agree. Sadly most of us aren't going to build from source, and some tools don't really work without sudo. (Did I mention VSCode? On Linux you get a .deb file. Yeah.)

In practice, building from source is not going to fix the problem. Nobody reads the source code of projects they download and compile themselves, certainly not for larger projects. It also takes a long time to compile larger projects. So, realistically, such audits rarely happen.

Of course, the one advantage of having source is that it is easier to run things like SAST tools against it, but how many people do that in practice? How integrated is that with package systems? And even when package maintainers provide hashes of what they ostensibly checked, you still need trust.

So we need a combination of static analysis tools that are integrated properly to produce trusted binaries, and you need earned trust and authority. Hyperindividualist self-reliance is, at the very minimum, impractical. And with authority, we know whose job it is to care for the quality of software and therefore whom to hang.


> building from source is not going to fix the problem. Nobody reads the source code of projects they download and compile themselves

However, commits tend to be much easier to trace at a later date than arbitrary binaries, so attackers will be less inclined to go that route. Once committed, it's there forever unless you can somehow get everyone to censor it from their own copies for an unrelated reason. Consider that the xz compromise involved downloading the payload later.

My policy is to either obtain binaries from a major distro or to build from a clean commit in a network isolated environment. If I can't go one of those routes it's almost always a hard pass for me.


The situation is absolutely insane, but it's also productive; real security would slow everything down a lot. The moment you ask some corporate bureaucrat to put their signature down on a piece of paper saying that such and such dev tool is approved for use, they're going to block everything to avoid the responsibility implied by their approval. I can't really come up with a system that both works and is secure. The only exception is signing up for an integrated environment where Microsoft or Apple provides the OS, compiler, and editor. Oops - Apple doesn't sell servers, so only Microsoft offers this. Hope you like C#.

In theory you can mix and match, but in practice most bureaucrats will insist on single-sourcing.


Linux development has a blueprint they could follow. Like the principle of least privilege. These aren’t cutting edge concepts.

Also I’m not sure the tradeoffs of adding security to an editor are that big of a deal. Are we really seeing revolutionary stuff here? Every now and then I check out VS Code only to realize Vim is still 10x better.


Vim is hardly secure either. Extensions in both provide for arbitrary code execution.

At the company I work for they locked down installing extensions through the marketplace. Some are available but most are not, and there is a process to get them reviewed and approved. You might be able to side load them still but I haven't cared enough to want to try.

They did the same with Chrome extensions.


Same thing for browser extensions: a simple browser extension (e.g. a web dark mode) can read all your password fields. It's crazy that there are no proper permission scopes in any major browser! It would have been so easy to make password/email fields exempt from browser extensions unless they ask for the permission.

Pro tip: I’ve seen plenty of dedicated extensions that could have just been simple snippet equivalents in Tampermonkey - an extension that lets you run JS limited to wildcarded websites.

I've used it to inject download links on sites, autoclose modals, etc. You can either write the scripts yourself, or review other people's before installing them.

It’s not a perfect solution, but at least it reduces the surface area to a single extension.

FYI: Just set Script Updates to Never.

https://github.com/Tampermonkey/tampermonkey


I do not think it'd be "so easy" to separate password input access into a separate permission, because it'd only open up a can of worms. There are so many ways to read a password input's value, from listening to key events to monkey-patching `fetch`, that it's not worth playing whack-a-mole just to give users a false sense of security.

I'm also skeptical that even a dark mode extension would be simple considering how varied web pages can be


In your example wouldn't that leave the email and password fields the wrong color? I agree with the principle though. Most extensions don't need to access everything.

Installing any 3rd party dev dependency without sandboxing should terrify you. These supply chain attacks are not hypothetical.

Trusting other devs to not write malicious code has led to a surprisingly small number of incidents so far, but I don't think this will extrapolate into the future.

With more lines of code being auto-written without deliberate intent or review from an accountable author, things can only get worse!


Yes, exactly. The lack of any sort of permission controls for extensions in VS Code gives me the creeps

I am (worried, that is), and recently stopped adding extensions from just any random anon. I also take time to sanitise foreign (to my knowledge) gh repos using Claude Code.

As an aside, claude and codex (and probably gemini) are pretty good at doing that. I've now done it with several repos and they are pretty good at finding stuff. In one case codex found an obscure way to reach around the authentication in one of our services. This is a great use case for LLMs IMHO

They are (of course) not foolproof and very well may miss something, so people need to evaluate their own risk/reward tradeoff with these extensions, even after reviewing them with AI, but I think they are pretty useful.



