> Information Systems are super-useful for hardening and regularising processes so that they follow standards, run more quickly, cheaply and smoothly and take expensive, error-prone humans out-of-the-loop. The downside is that IS’s are fragile in the face of change.
The author’s view is that this is because of the way they grow and evolve, but to me it seems like part of the nature of software itself.
AI shows promise in getting computers to behave more flexibly and naturally, and show “graceful degradation”.
I wonder if these techniques should/will ever be applied effectively to get us out of the mess we’re in now.
My view is that they’ll probably just add to the complexity and make things worse.
I thought the whole point of Go was to remove extraneous features and provide a completely minimal programming language, the argument being that all those extra features are a source of bugs.
Nevertheless, modules and try... it seems like it’s just an effort to add them all back in again.
Surely if this happens go will be indistinguishable from all the other languages it was designed to be different from?
Try was rejected, and modules are awesome in my opinion. Easily the best dependency management I’ve worked with, across Ruby (gems), Python (pip), Java (Gradle/Maven), etc.
Up to this point, the ratio of useful enhancements to unnecessary cruft has heavily skewed towards usefulness.
The post links to a new questionnaire the core team wants all change proposers to answer. The questions are pretty good and force a bit of intense soul-searching on everyone asking for a change. That will probably stifle ideas a bit, but resisting change by default is what we want from this language anyway.
Those are all awful dependency management examples.
I won't say Go dependency management is terrible, but it's certainly not awesome. At least to someone who has used PHP (Composer), Rust (Cargo), JavaScript (NPM), C# (NuGet).
Half the issues you mentioned are due to the ecosystem/community (one-liner packages and deep dependency trees aren't the fault of npm, though they might be an unintentional result of how easy it was, and is, to publish and reuse packages), and the other half I don't notice by using yarn/pnpm.
Installing type definitions is optional; you only need them if you use TypeScript, which you don't have to. They do improve the editor experience for vanilla JS, but they go under dev dependencies.
There are a lot of things that can be improved though.
A lot of packages put their config inside package.json, which is honestly messy. The whole scripts section is a bit restricting; a better approach would have been to follow how mix (Elixir) does it.
JSON is limiting as a format: no comments.
Like you mentioned, it inherits the mentality of js ecosystem. It doesn't feel part of node but a separate piece of its own.
Gems are fine-ish... the RubyGems infrastructure is really slow, though. Maybe GitHub Packages will be better. And the ton of native code compilation sucks a bit, especially compared to Go. CGO isn’t all roses, but it’s still a bit less common because you can get comparable performance with pure Go.
Why is Gradle/Maven significantly different from NuGet as far as dependency management goes? The only major difference is that Gradle and Maven also handle a lot of the build management.
I was curious - what's wrong with Maven that is improved in Go modules?
I personally found Maven much more friendly, since there are no odd interactions with your source-control solution, you get all relevant details in the Maven pom.xml. I also find this idea of relying on semver, especially with Go's insistence on renaming packages for major version changes, to be very unpleasant and brittle, especially for internal packages.
Each library needs to be built into a JAR, which isn't the case in Go - in Go you just put in the code URL (git repo / branch / sha1) and you have it. It also locks the dependency to that sha1 and cryptographically verifies it. So you get all the benefits of building an artefact, hash verification and a central repo, without having to do any of the work.
> in Go you just put in the code URL (git repo / branch / sha1) and you have it
It seems you're thinking of the old Go dependency management.
The current Go dependency management, Modules, means that you put in the unique name of the Module (which is its URL) and the Semver-compatible version. Then go mod can resolve the exact code you need.
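As a sketch, a go.mod looks something like this (the module path and versions here are made up for illustration; the long pseudo-version on the second require line is how go mod pins an untagged commit to a timestamp and sha):

```
module github.com/example/myservice

go 1.13

require (
    github.com/pkg/errors v0.8.1
    golang.org/x/sync v0.0.0-20190423024810-112230192c58
)
```

Running a build (or `go mod tidy`) resolves those requirements and records checksums in go.sum.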
Sure, it's still source-code based, so you don't need to build a JAR file. Of course, that also means you can't depend on pre-built binary artifacts - everything required to build your Go module must be available as source code.
Go mod also checks version compatibility based on Semver and uses minimal version selection: it builds with the lowest version that satisfies every module's requirement (in practice, the highest of the minimum versions anyone asks for), rather than the newest published version.
Your source-control server must also know how to act as a Go module repository (it needs to respond to some go-mod-specific HTTP calls, as far as I could tell).
Now, when you want to publish a new version in Go, you don't build and publish a specific artifact to some extra repository. Instead, you need to tag some commit in your repo with a Semver version.
If you want to publish a new major version, you need to do much more than that, since any major version higher than v1 becomes part of your package's import path (import "github.com/mymod/mypack" becomes import "github.com/mymod/mypack/v2" in any code using your module, including internally).
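Concretely, the major bump shows up in two places (module path hypothetical, reusing the name from the comment above):

```
// go.mod of the library, after the bump:
module github.com/mymod/mypack/v2

// ...and every import site, including the module's own internal packages:
import "github.com/mymod/mypack/v2"
```

So a v2 release is effectively a rename, which is exactly the "unpleasant and brittle" part being discussed.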
Actually, I find it a disadvantage: I don't want to deal with source code from other teams/companies; I'd rather have binary artifacts totally separated from their toolchains.
I find it a huge advantage; in fact, the lack of blobs may be my favorite Go feature. I'm sure stodgy old companies that want to preserve "IP" through security-by-obscurity dislike this model.
But as a dev? I can drill down to the core of ANY import (C-b in GoLand, jump to definition). Even the entire Go toolchain is written in Go.
ABI/version management of artifacts is a nightmare, every single time.
To your point about integrity verification: Maven does this too if you pass the `-C` flag.
One of the major benefits of Maven/Cargo repositories is that they are configured to be immutable: no issues with deleted repos or deleted/overwritten tags. Once your dependency is published, it's there forever.
The idea of a “branch” is not particularly novel, and I think any useful VCS will have an equivalent. I’d be perfectly fine relying on that as much as I do on any other aspect of a package manager.
Isn't that much harder to do with Go mod than with Maven?
With Maven, you can define your own versioning scheme and easily include the branch as a component of the version "number".
In Go mod, as far as I can tell, you have to have a Semver vMAJOR.MINOR.PATCH version, which is much more difficult to adjust for short-lived branches.
Programming languages are products just like anything else in the software industry.
Either they adapt or eventually fade away.
Hence we get these reboot cycles where new languages are introduced as a revolution against the establishment, and a couple of years later are just as feature-rich as the ones they were "fighting" against.
By "revolution against the establishment", I take you to mean "revolution against the complexity of existing tools". Meaning, a simpler tool. You can build a tool that will do 80% of what the existing tool does, with 20% of the complexity (or maybe even 90/10). And that's great... until you need the ability to do that last 10 or 20%. Then the simple tool has trapped you.
But by then, you've got a lot of code in the new tool. So what you want is a way to do whatever part of the last 10 or 20% of power that you need for your problem. "It's just a small addition!" But there's someone else who needs a different part of the last 10 or 20%, and wants to add that part...
And so you wind up with the new tool becoming as complex as the old tool. And then, as you say, the cycle repeats.
I think that if a tool is going to be an "80% of the power at 20% of the complexity" tool, and remain that, then it has to have an escape mechanism. You've written your 100,000 lines of simple code, and you need 50 lines in a more powerful tool, well, there's a clean way to use code written in a more powerful language for those 50 lines. Then the language can remain one that just has 20% of the complexity (if those in charge of the language can maintain their vision and their stubbornness).
One nice thing about Go is the existence of cgo. Yeah, it's discouraged, and rightly so, but you have that option. The ol' "Give it to C, C will do anything".
The other is IPC. Go is so dang easy at concurrency, managing data flow, async IO, etc, that I find it really lends itself to working as a cog in a larger machine, usually distributed. Don't like solving problem X in Go? Solve it however you want and just talk to your Go process.
So you have 2 escape hatches which were much less tenable as overall approaches even 10 years ago. So hopefully Go can stay lean and mean. I think it also helps that unlike other systems languages, Go doesn't have any intent on being a catch-all language. Graphics, hard real-time, drivers? You ain't gonna reach for Go. Light scripting, data science, machine learning? Also probably not Go.
Yes, I do agree, especially because those 80% are not the same for everybody once you start to push the language into domains it wasn't originally designed for.
>Nevertheless, modules and try... it seems like it’s just an effort to add them all back in again.
Keyword is "extraneous features". 10 years of hard experience showed neither of modules nor a better error handling story (not necessarily "try") are "extraneous".
On the contrary, extraneous is what we get when everybody implements their own ad-hoc solution for those.
Same here. There are tons of other languages to choose from if you don't like what makes Go unique. People seem obsessed with making all languages homogeneous rather than appreciating what makes them different.
Doesn’t account for political problems, time management problems or the ever-present issue of people turning your estimate into a commitment, but sure.
On the other hand, as an individual working in IT I can achieve so much more than I could before... team sizes seem like they’re a lot smaller now as a result. That’s the upside, surely?
Is the financial crisis a good example here? A systemic failure of risk management, caused by a bonus scheme that rewarded bad behaviour and led banks to over-extend themselves. It seems like it was short-term rewards rather than any failure to understand the risks per se. Lots of people called it.