This reminds me again of _Programming as Theory Building_[1] by Peter Naur. With agents generating code so quickly, we lose the time needed to build the theory in our heads.
Shortening feedback loops is exactly what Kent Beck and the TDD advocates were emphasizing. Now that TDD has been ruined by "experts", people are rediscovering the importance of fast feedback loops from a different angle.
But what most of them pursue is not being more efficient but being seen as more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are striving for efficiency, whether they succeed or not.
Peter Drucker popularized the phrase "Efficiency is doing things right; effectiveness is doing the right things."
Being credibly efficient at doing the wrong things turns out to be a massive issue inside most companies. What's interesting is that I do think AI offers an opportunity to be massively more effective: with the right LLM, trained the right way, you can explore a variety of scenarios much faster than you can by yourself. Yet we hear very little about this as a central thrust of how to bring AI into the workplace.
In my experience plenty of places are quite inefficient at doing the wrong things as well. You might think this reduces the number of wrong things done, but somehow it doesn't.
It's almost comical, isn't it? But it turns out this is a big foundation of behavioral economics. In essence, you can get trapped in an upper-level heuristic and never stop for a moment to think things through.
Another one of my favorite examples: there is some research out of Harvard suggesting that if people spent 15 minutes a day reviewing what they had done and what was important, they increased their productivity by 22%. You would think a result this obvious and this dramatic would have a variety of Fortune 500 companies saying "oh my goodness, we want all of our workers to be 22% more productive," and they would simply send out a memo or an email or set up some process to get people reflecting.
I would also suggest that Microsoft had a unique advantage based on the idea that people should have their own enclosed workspace for coding. This was deeply entrenched when Bill was running the company day-to-day, and I'm sure, as somebody who was a coding phenomenon, it simply made sense to him. But academically, it also makes sense.
Microsoft has reversed this policy, but as far as I can tell, the reversal doesn't have anything to do with the research. It has to do with statements about working together efficiently, or AI productivity. If there's real research behind it, great.
My problem is that there just doesn't appear to be any real research behind it. Yet I'm sure many managers at Microsoft think it's very efficient. Of course, anybody at Microsoft who codes has their own opinion, and rather than my repeating hearsay, it would be fantastic to have somebody anonymously post what's really going on here. I'll betcha a nickel that 90% of them are not reporting that they feel a lot more effective.
That happens whether the state is immutable or not. In the mutable world, you have to guard it with a mutex or something similar. In that case, operation 1 may be blocked by operation 2, and you may then read a "stale" state left by operation 2. But that's okay: you'll get a fresh state next time. The real problem occurs when two states get mixed together and corrupted.
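A minimal sketch of that distinction, using Python threads (the two-field state and the doubling invariant are made up for illustration): with the whole update and the whole read each guarded by the lock, a reader may see a snapshot that is stale relative to the writer's progress, but never a half-written, mixed pair.

```python
import threading

# Hypothetical shared state: two fields that must always stay in sync.
state = {"value": 0, "double": 0}
lock = threading.Lock()

def writer(n):
    for i in range(n):
        with lock:  # guard the whole update: readers never see a half-written pair
            state["value"] = i
            state["double"] = i * 2

def reader(n, results):
    for _ in range(n):
        with lock:  # snapshot under the same lock: possibly stale, but consistent
            results.append((state["value"], state["double"]))

results = []
w = threading.Thread(target=writer, args=(10_000,))
r = threading.Thread(target=reader, args=(1_000, results))
w.start(); r.start()
w.join(); r.join()

# Every snapshot satisfies the invariant, even the "stale" ones.
assert all(d == 2 * v for v, d in results)
```

Without the lock around both writes, a reader could observe `value` from one update and `double` from another: the "mixed and corrupted" case, which is strictly worse than staleness.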
It's almost always npm packages. I know that's because npm is the most widely used package system and the most attractive target for attackers. But it still leaves a bad taste in my mouth.
Even OpenAI uses npm to distribute their Codex CLI tool, which is built in Rust. Which is absurd to me, but I guess the alternatives are less convenient.
This is why I don't run stdio MCP servers. All my MCP servers run in Docker containers on a separate VM host on an untrusted VLAN, and I connect to them via SSE.
Still vulnerable to prompt injection, of course, but I don't connect LLMs to my main browser profile, email, or cloud accounts either. Nothing sensitive.
If you had used this package, you would still have been a victim despite that setup: every password reset, or anything else sent by your app, BCC'd to the bad guy.
Here's hoping the above comment isn't upvoted to the point where it gets portrayed as a "key takeaway" from the article. That would be missing the point.
[1] https://pages.cs.wisc.edu/~remzi/Naur.pdf