I think that misses the point - it's a little bit like asking why FPS game developers don't lean into aimbot usage. You could, but by default it's a bit boring, and a different type of game.
This is something I'm very interested in implementing for Docker builds. I've tested out CDC (content-defined chunking) for the final image outputs; it results in smaller outputs, but requires tuning the trade-off between saved space and request count when pulling. For build cache it might be even more advantageous.
Docker is...quite slow with large images. I've built a registry + pull client + buildkit builder to make it better. It splits apart layers, allowing files to be shared between related images. In a robotics context, it can make pulls 10x faster. And in a cloud context, the format allows pulling an image in 15 or 20 seconds instead of 60, without having to do a FUSE-based lazy-pull setup. Builds are faster, I store 7x less data due to better deduplication, I can run security scans faster because I don't have to unpack tarball layers, etc, etc. I want to be the default registry for all ML-related work, in the future.
After his release, Juettner briefly achieved celebrity status. His notoriety became so widespread that Hollywood adapted the story into the 1950 film Mister 880, directed by Edmund Goulding. Eventually, Juettner made more money from the release of Mister 880 than he had made by counterfeiting.
Maybe I'm too dumb, but I haven't figured out a good way to sign just a binary (or a tar/zip containing a few binaries). I zipped up the binaries, sent them off to Apple, Apple comes back and says "yup, notarized!", and they still trigger the popup. I'm probably missing a step. I guess I'm not currently stapling the ticket to the binary, but supposedly you don't have to if you are running with a network connection.
> I guess I'm not currently stapling the ticket to the binary, but supposedly you don't have to if you are running with a network connection.
AFAIK, you do in fact have to staple the ticket. The other thing I found is that you have to make sure you're using the right kind of certificate from Apple.
There are two different steps: there is signing and there is notarization. You sign with your developer certificate using productsign/codesign, and then there is notarization, where you use notarytool to submit your signed binary to Apple.
Finally, you take their response and staple it to your binary. It's a lot of steps.
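For reference, the flow for a standalone CLI binary looks roughly like this. The certificate name, team ID, binary name, and keychain profile name below are all placeholders, and this is a sketch rather than a complete recipe:

```
# 1. Sign with a "Developer ID Application" cert; hardened runtime and a
#    secure timestamp are required for notarization to succeed
codesign --sign "Developer ID Application: Your Name (TEAMID)" \
  --options runtime --timestamp ./mytool

# 2. Zip it and submit to Apple's notary service (profile stored earlier
#    via `xcrun notarytool store-credentials`)
ditto -c -k --keepParent ./mytool mytool.zip
xcrun notarytool submit mytool.zip --keychain-profile notary --wait

# 3. Stapling attaches the ticket for offline verification -- but as far as
#    I know, stapler only works on .app/.dmg/.pkg, not on a bare Mach-O
#    binary, so for standalone binaries Gatekeeper fetches the ticket online
xcrun stapler staple MyApp.app
```

That stapler limitation may be why the bare-binary case in the parent comment still behaves oddly: there's nowhere to staple the ticket, so verification depends on the online lookup.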
Going to self-promote one last time here - I've built a fix for this, at least for the registry/image export side, at https://clipper.dev. Docker(Hub) can't share large files between layers, but I can.
Zstd, for example, only promises deterministic output within the same version of the library. I've personally seen the hashes mutate between pull and export. Things like tar padding also make a difference. Really, the thing to do is to hash the _uncompressed_ data and treat compression as a transport/registry detail. That's what I've done, at least.
Yes, compression being part of the OCI image's digest was (in hindsight) a poor decision. _Technically_ OCI images allow uncompressed layers, and the layers could be included without compression (and transport compression to be used); this would allow layers to be fully reproducible. We explored some options to do this (and made some preparations; https://github.com/containerd/containerd/pull/8166), but also discovered that various implementations of registry clients didn't handle transport-compression correctly (https://github.com/distribution/distribution/pull/3754), which could result in client either pulling the full, uncompressed, content, or image validation failing.
For my registry fork/custom pull client I hash the uncompressed content and store it compressed, keyed by the uncompressed digest. This lets me have my cake and eat it, too: compression-free digests, smaller storage costs, consistent compression settings, the ability to spend extra CPU recompressing on the backend without breaking hashes, etc. I control both pull client and registry, so it works.
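The core of that scheme fits in a few lines. A minimal sketch, with made-up file names and a flat `store/` directory standing in for real registry storage:

```shell
# Content-address a layer by its *uncompressed* digest, store it compressed.
mkdir -p store
printf 'pretend this is a tar layer\n' > layer.tar   # stand-in for a real layer

# Digest the uncompressed bytes -- this is what the manifest would reference
digest=$(sha256sum layer.tar | awk '{print $1}')

# Store compressed; the level (or even the codec) can change later on the
# backend without invalidating the digest
gzip -9 -c layer.tar > "store/sha256-${digest}"

# On pull: decompress, then verify against the manifest digest
pulled=$(gunzip -c "store/sha256-${digest}" | sha256sum | awk '{print $1}')
[ "$pulled" = "$digest" ] && echo "digest ok"
```

Because the digest never sees the compressed bytes, the backend is free to recompress old layers with a newer zstd (or anything else) without breaking any client.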