What's the model of the pinning? Is it an arbitrary semver? Is it a sha256 calculated by the content of the package (Content Addressed)? Is it a checksum calculated from all the inputs that were used to create the package install spec (Input Addressed)?
At best, you can do something like this: `apt-get install gparted=0.16.1-1`
That handles the semver case, but it doesn't address the other two:
- sha256 calculated by the content of the package (Content Addressed)
- checksum calculated from all the inputs that were used to create the package install spec (Input Addressed)
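The difference between those last two can be sketched in a few lines of Python. Function names and fields here are illustrative, not Nix's actual scheme:

```python
import hashlib

def content_address(package_bytes):
    # Content-addressed: hash the built artifact itself.
    # You can only compute this after the build has run.
    return hashlib.sha256(package_bytes).hexdigest()

def input_address(source_hash, deps, build_flags):
    # Input-addressed: hash everything that went into the build spec
    # (source, dependencies, flags), before the build even runs.
    # This is roughly the shape of what a Nix derivation hash covers.
    spec = "|".join([source_hash, *sorted(deps), *sorted(build_flags)])
    return hashlib.sha256(spec.encode()).hexdigest()

artifact = b"...compiled bytes..."
print(content_address(artifact))
print(input_address("abc123", ["glibc-2.38", "gcc-13"], ["-O2"]))
```

The practical consequence: an input-addressed store can decide "this exact build already exists" without building anything, while a content address can only confirm it after the fact.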
> Nix isn’t the first package manager to do something unique.
Sure. But it is the first package manager to do the things listed above. After all, that's the reason Nix exists: no package manager before it did what it does.
Are you using Bazel as your build system? If not, for almost all languages, your build system itself is probably not reproducible. GCC uses random seeds internally. Lots of compilers embed timestamps in their output (javac does).
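The timestamp problem is easy to demonstrate. This is just the shape of the problem, not any real compiler's behavior: identical source, two builds, two different hashes, until you pin the clock the way SOURCE_DATE_EPOCH standardizes:

```python
import hashlib
import time

def build(source, timestamp=None):
    # Many toolchains embed a wall-clock build timestamp in the artifact.
    ts = time.time() if timestamp is None else timestamp
    return source + f"\n// built at {ts}".encode()

src = b"int main() { return 0; }"

# Two builds of identical source, moments apart: different artifacts.
first = hashlib.sha256(build(src)).hexdigest()
time.sleep(0.05)
second = hashlib.sha256(build(src)).hexdigest()
print(first == second)  # almost certainly False: the timestamp leaked in

# Pin the timestamp (the SOURCE_DATE_EPOCH trick) and builds agree:
print(build(src, timestamp=0) == build(src, timestamp=0))  # True
```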
For almost all non-Google use cases, repeatable is generally good enough.
It's reproducible enough for most purposes most of the time. When is the last time your Python web app serving cat pictures went down because of a subtle change in the Debian Bookworm container?
I was cobbling together build scripts for a mail system for Raspi/Armbian the last couple weeks. Very similar packaging stuff, but the number of little subtle differences in install/postinstall/prerm/postrm scripts took a generous level of spackling over to get just right.
Hell, once I get everything nailed down, I'm writing test frameworks for my bloody build scripts if you can believe it.
Same with static types. Sometimes it really helps to limit yourself, other times it's setting up unnecessary obstacles with no benefit. Sometimes you want Idris 2 or Rust, and sometimes you want Ruby or Clojure.
Those sound the same to me. Are you drawing a distinction between a repeatable/reproducible process vs result? Like if you run the same command to fetch a dependency, you wind up with different results if the dependency maintainer releases a new update and you aren’t pinning the dependency?
To use the apt analogy further up in the thread, `sudo apt install git` is repeatable in your dockerfile, but often not reproducible. Later on you will get a different build. Across say 500 packages and 1,000,000 containers (or say 1000 container images if you are deploying images) over even a week this becomes extremely... varied...
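One way to pin down the distinction: the command is the same, the result isn't. A toy simulation (names and versions made up for illustration, nothing like apt's actual resolver):

```python
# Toy model: "apt install git" resolves to whatever the archive
# currently carries, so the same command drifts over time.
archive = {"git": "1:2.39.2"}

def install(pkg, pin=None):
    # Repeatable: this call always succeeds the same way.
    # Reproducible: only if you pin the exact version.
    version = pin or archive[pkg]
    return f"{pkg}={version}"

build_monday = install("git")                   # 'git=1:2.39.2'
archive["git"] = "1:2.43.0"                     # Debian ships an update
build_friday = install("git")                   # 'git=1:2.43.0', a different build
build_pinned = install("git", pin="1:2.39.2")   # same artifact either week
print(build_monday, build_friday, build_pinned)
```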
On smaller scales, this is perfectly fine. How often does Git actually release breaking changes to features you actually use inside your Dockerfile? How often does Debian pull such a version into their stable OS? And why didn't you just version-pin Git like Hadolint told you to?
Exact reproducibility is nice for two scenarios: 1) academic research, and 2) very large-scale applications and deployments. For regular people writing boring small web apps, choosing a stable base image and pinning dependencies is good enough.
Consider also that your preferred programming language will also very likely not provide particularly reproducible package builds.
I've never looked seriously into it, but my feeling is that distros delete old versions as newer ones are uploaded: when I run `apt-cache policy git` on my Ubuntu box, I only see a couple of versions available to install, and other packages often show only a single one (the latest).
I know that Debian has Snapshot for older packages, but you're still at the mercy of other people, and people are fickle. Nix should let you pin the base images you build from to specific versions.
However, much in the same way that if you actually take your build system seriously you'll store your application dependencies in a local proxy, you can run a mirror or proxy to hold these historical packages too.
Take a look at something like apt-cacher. However, it's a pull-through proxy cache: you can reproduce builds using the exact same package versions you've already fetched, but if upstream deletes old packages and you want to roll back to one you haven't previously downloaded, you're out of luck.
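Whatever mirror or cache you run, the safety net is recording a checksum alongside each version you depend on, so any re-fetched or rolled-back package can be verified. A sketch, with a made-up lockfile (this is roughly what apt's Packages index and pip's `--hash` mode do):

```python
import hashlib

# Hypothetical lockfile committed alongside your build scripts:
# artifact filename -> sha256 of the exact .deb you tested with.
# (The hash below is sha256 of the bytes b"test", for demonstration.)
lockfile = {
    "git_2.39.2_amd64.deb":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify(filename, data):
    # Refuse any artifact whose content hash doesn't match the lock.
    expected = lockfile.get(filename)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected

print(verify("git_2.39.2_amd64.deb", b"test"))      # True
print(verify("git_2.39.2_amd64.deb", b"tampered"))  # False
```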
Yes. This can be a problem in scientific research involving numerical code or random number generation, where results can vary even due to small inconsequential-seeming changes, leading to your results not being reproducible scientifically because they're not reproducible computationally.
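Seeding helps with the RNG half of that, though it only fixes your own code's randomness, not numerical drift from a changed library or compiler. A minimal illustration:

```python
import random

def noisy_experiment(seed=None):
    # Stand-in for a simulation that draws random numbers.
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

# Unseeded: two runs draw from OS entropy and will disagree,
# so the "result" isn't computationally reproducible.
print(noisy_experiment() == noisy_experiment())

# Seeded: bit-identical across runs (same Python version).
print(noisy_experiment(seed=42) == noisy_experiment(seed=42))  # True
```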
This is keeping me from immediately jumping on Nix.
I'd rather use Guix and Scheme, but it's behind. If I learn Nix and Guix someday catches up, I'll already know Nix and won't want to switch anymore. Can't say I like the outlook for Guix.
Interesting. I've been writing Bash scripts for over 20 years and despite my comfort with it, I'm always exploring avenues of replacing it. Despite the time investment already spent, I know there are far better tools out there to do the same jobs.
NixOS doesn't follow the UNIX philosophy in many ways, especially when it comes to systemd. If you want NixOS you will have to use systemd and embrace it. Otherwise you can go the Guix route and try to make the most out of the much inferior ecosystem.