That wouldn't be very portable. A benefit of committing to your history is that it lives with the code no matter where the code or the AI service you use goes.
That's true. I was thinking of it as being more like how LFS works since presumably the LLM contexts could be large and you wouldn't necessarily want all of them on every clone.
I've generally been in the squash camp, but more out of a sense of wanting a "clean" and bisectable repo history. In a world where git (and git forges) could show me atomic merge commits but also let me seamlessly fan those out to show the internal history and iteration, and maybe stuff like LLM sessions, I'd be into that.
And yes, it's my understanding that mercurial and fossil do actually do more of this than git does, but I haven't actually worked on any projects using those so I can't comment.
I feel like this distinction isn't made often or clearly enough. AI as a superpowered autocomplete is far more useful to me than trying to one-shot entire programs or modules.
Agreed. I'd also add that I have varying levels of watchfulness: paid work I inspect (and understand) every line, and iterate. JS for my blog, I inspect. Throwaway scripts, I skim.
A lot more men than women are able to be content with the comfortable mediocrity that is bringing in the paycheque, doing the chores, getting laid once or twice a month, but otherwise not really feeling much passion or enthusiasm or joy with their partner.
It's not the life you hope for, but there's a lot of social messaging that that's just the way it is, it's what you signed up for, you would be selfish to leave, the grass won't be greener, and also it's probably your fault anyway for not being a better husband. The messaging to women in romcoms and the like is much more toward you deserve better, be brave, junk the loser, go get the life you want.
As a guy who was in a mediocre marriage like this for many years, I basically got my emotional needs met elsewhere: through work, family, friends, time and activities with my kids, etc.
Of course they're correlated but it's obvious to anyone who has had a long term relationship unravel that the causes are always complicated and multi-layered.
I (a man) was the one who pulled the trigger on my divorce, but that followed years of conflict and withdrawal on both sides. You can point to specific milestones (who killed the bedroom, who opened a separate bank account first, who stepped out first, who wouldn't come back to counselling), but it's actually better for healing not to be preoccupied with the blame game and instead to focus on where one's own growth opportunities are.
It's also a very clear differentiator for them relative to Google, Facebook, and OpenAI, all of whom are clearly varying degrees of willing to sell themselves out for evil purposes.
It will also cost OpenAI dearly if they don't communicate clearly, because I for one will push internally to switch from OpenAI (we're actually on Azure) to Anthropic. I'll switch my private account as well.
Given the history of US military adventurism and that we’re about to start another completely unjustified war of aggression against Iran, yes. Absolutely yes.
If it wasn't for US military power, Russia would have already overrun Ukraine. And if the Iranian nuclear program is destroyed and the regime falls, that would be a good thing. For context, I'm from Czechia.
I'm from the US and strongly disagree that either of those things are a benefit to me as a US citizen. All it's doing is taking my money and putting me more at risk, and in the case of the attack on Iran: making me complicit in the most immoral acts imaginable.
> What about all the weapons forbidden by the Geneva convention?
Some weapons are prohibited by the Geneva Conventions because they are designed to cause suffering or to kill non-combatants indiscriminately:
"Weapons prohibited under the Geneva Convention and associated international humanitarian law (including the 1925 Protocol, CCW, and specific treaties) include chemical/biological agents (mustard gas, sarin), blinding lasers, expanding bullets, and non-detectable fragments. Also banned are anti-personnel landmines and cluster munitions.
Key prohibited and restricted weapons include:
Chemical and Biological Weapons: The 1925 Geneva Protocol and subsequent conventions (1972, 1993) banned the use, development, and stockpiling of asphyxiating, poisonous, or other gases, including nerve agents and biological weapons.
Blinding Laser Weapons: Specifically designed to cause permanent blindness (Protocol IV of the CCW).
Non-detectable Fragments: Weapons designed to injure by fragments not detectable in the human body by X-rays (Protocol I of the CCW).
Incendiary Weapons: Restrictions on using fire-based weapons (like flamethrowers) against civilian populations (Protocol III of the CCW).
Anti-personnel Landmines: Banned under the Ottawa Treaty (1997) due to risks to civilians.
Cluster Munitions: Prohibited under the Convention on Cluster Munitions (2008) due to their indiscriminate nature.
These treaties aim to protect civilians and combatants from unnecessary suffering and long-term danger."
Would "good hands" choose weapons that are designed to cause suffering or that kill indiscriminately?
As someone who has worked in the space for a while and been heavily exposed to nix, bazel, cmake, bake, and other systems, and also been in that "passion project" role, I think what I've found is that these kinds of systems are just plain hard to talk about. Even the common elements like DAGs cause most people's eyes to immediately glaze over.
Managers and executives are happy to hear that you made the builds faster or more reliable, so the infra people who care about this kind of thing don't waste time on design docs and instead focus on getting to a minimum prototype that demonstrates those improved metrics. Once you have that, then there's buy-in and the project is made official... but by then the bones have already been set in place, so design documentation ends up focused on the more visible stuff like user interface, storage formats, etc.
OTOH, bazel (as blaze) was a very intentionally designed second system at Google, and buildx/buildkit is similarly a rewrite of the container builder for Docker, so both of them should have been pretty free of accidental engineering in their early phases.
I don't think you can ever get away from accidental engineering in build systems because as soon as they find their niche something new comes along to disrupt it. Even with something homegrown out of shell scripts and directory trees the boss will eventually ask you to do something that doesn't fit well with your existing concepts.
A build system is meant to yield artifacts, run tools, parallelize things, calculate dependencies, download packages, and more. And these are all things that have some algorithmic similarity which is a kind of superficial similarity in that the failure modes and the exact systems involved are often dramatically different. I don't know that you can build something that is that all-encompassing without compromising somewhere.
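For readers to whom the DAG framing is abstract, here's a minimal sketch in Python (the target names and the dependency table are invented for illustration): at its core, a build system is a topological walk over a dependency graph, where each node can be built once all of its dependencies are done, and independent nodes can run in parallel.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# A toy dependency graph: each target maps to the targets that must
# be built (or present) before it.
deps = {
    "app":    {"lib.o", "main.o"},
    "lib.o":  {"lib.c"},
    "main.o": {"main.c"},
    "lib.c":  set(),
    "main.c": set(),
}

# static_order() yields dependencies before their dependents, which
# is exactly the order a sequential build scheduler would walk;
# a parallel scheduler would instead dispatch each "ready" batch
# concurrently via get_ready()/done().
order = list(TopologicalSorter(deps).static_order())
print(order)
```

The failure modes the comment mentions live in what happens *at* each node (a compiler invocation, a download, a codegen step), which is why the shared DAG machinery is only a superficial similarity.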
Blaze and Bazel may have been intentionally designed, but they were designed for Google's needs, and it shows (at least from my observations of Bazel; I don't have any experience with Blaze). It is better now than it was, but it was obviously designed for a system where most dependencies are vendored, and it worked better for the languages Google used, like C++, Java, and Python.
Blaze instead of make, ant, maven. But now there's CMake and Ninja; gn generates Ninja files, as CMake can too, these days FWIU.
Blaze is/was integrated with Omega scheduler, which is not open.
Bazel is open source.
By the time Bazel was open sourced, Twitter had pantsbuild and Facebook had buck.
OpenWRT's Makefiles are sufficient to build OpenWRT and the kernel for it. (GNU Make is still sufficient to build the Linux kernel today, in 2026.)
Make decides whether to rebuild a target that already exists by comparing file modification times (mtimes) against its prerequisites, unless the target is listed under `.PHONY:` in the Makefile. Target names may contain slashes (they are usually file paths) but not spaces.
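A minimal Makefile illustrating both points (the file names are invented): `hello` is recompiled only when `hello.c` has a newer mtime, while `clean` is declared phony so make runs its recipe even if a file named `clean` happens to exist.

```make
# hello depends on hello.c; make compares mtimes and recompiles
# only when hello.c is newer than hello (or hello is missing).
hello: hello.c
	$(CC) $(CFLAGS) -o $@ $<

# Without .PHONY, a file literally named "clean" would make this
# target look up to date and the recipe would never run.
.PHONY: clean
clean:
	rm -f hello
```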
`docker build`, and so also BuildKit, archives the build root after each step that modifies the filesystem (RUN, ADD, COPY) as a cacheable layer; ADD and COPY steps are cache-keyed by a hash of the copied content, while RUN steps are keyed by the instruction and the parent layer.
The FROM instruction creates a build stage from scratch or from a different container layer.
Dockerfile added support for Multi-stage builds with multiple `FROM` instructions in 2017 (versions 17.05, 17.06CE).
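A minimal multi-stage Dockerfile along those lines (the base image, paths, and build command are illustrative): the first `FROM` opens a named build stage, and the second starts a fresh stage that copies only the artifact out of it, so the toolchain layers never reach the final image.

```dockerfile
# Stage 1: build environment; each instruction is a cacheable layer.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: fresh stage from scratch; only the binary is copied in.
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```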
`docker build` is now part of Moby, with BuildKit as the default builder; `podman build` (which also accepts a `buildx` compatibility alias) seems to work.
nerdctl supports a number of features that have not been merged back to docker or to podman.
> it obviously was designed for a system where most dependencies are vendored, and worked better for languages that google used like c++, java, and python.
Those were the primary languages at Google at the time. And what was there to build software with otherwise? Make, shell scripts, Python, a Makefile that calls git which calls perl so perl has to be installed, etc.
>> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).
Which CPU microarchitectures and flags are supported?
AVX2 is in x86-64-v3; AVX-512 is in x86-64-v4. By utilizing features like these, we would save money (by using features of processors far newer than the original x86-64-v1 baseline of the Pentium 4 era).
How to add an `-march=x86-64-v3` argument to every build?
How to add build flags to everything for something like x86-64-v4?
Which distros support consistent build parametrization, so that global compiler flags can be added across multiple compilers?
- Gentoo USE flags
- rebuild a distro and commit to building the core and updates and testing and rawhide with your own compiler flags and package signatures and host mirrored package repos
- Intel Clear Linux was cancelled.
- CachyOS (x86-64-v3, x86-64-v4, Zen4)
- conda-forge?
Gentoo:
- ChromiumOS was built on gentoo and ebuild IIRC
- emerge app-portage/cpuid2cpuflags, CPU_FLAGS_X86=, specify -march=native for C/[C++] and also target-cpu=native for Rust in /etc/portage/make.conf
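Tying the Gentoo bullets together, the relevant `/etc/portage/make.conf` fragment might look like this (the flag values are illustrative; `-march=native` could replace `-march=x86-64-v3` when building only for the local machine):

```sh
# /etc/portage/make.conf -- global compiler flags for all packages.
COMMON_FLAGS="-O2 -pipe -march=x86-64-v3"
CFLAGS="${COMMON_FLAGS}"
CXXFLAGS="${COMMON_FLAGS}"
# Rust packages read RUSTFLAGS separately.
RUSTFLAGS="-C target-cpu=x86-64-v3"
# Values for CPU_FLAGS_X86 come from app-portage/cpuid2cpuflags.
CPU_FLAGS_X86="aes avx avx2 f16c fma3 popcnt sse4_1 sse4_2"
```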
The ansible-in-containers thing is very much an unsolved problem. Basically right now you have three choices:
- install ansible in-band and run it against localhost (sucks because your playbook is in a final image layer; you might not want Python at all in the container)
- run ansible from outside against a running container via the docker/podman connection plugin, then commit the result as an image
- copy a previous stage's root into a subdirectory, run ansible on that as a chroot, then copy the result back to a scratch container's root.
All of these options fall down when you're doing anything long-running though, because they can't work incrementally. As soon as you call ansible (or any other tool), then from Docker's point of view it's now a single step. This is really unfortunate because a Dockerfile is basically just shell invocations, and ansible gives a more structured and declarative-ish way to do shell type things.
I have wondered if a system like Dagger might be able to do a better job with this, basically break up the playbook programmatically into single task sub-playbooks and call each one in its own Dagger task/layer. This would allow ansible to retain most of its benefits while not being as hamstrung by the semantics of the caller. And it would be particularly nice for the case where the container is ultimately being exported to a machine image because then if you've defined everything in ansible you have a built-in story for freshening that deployed system later as the playbook evolves.
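A rough sketch of that splitting idea in Python (the playbook here is a hand-built dict standing in for one parsed from YAML with `yaml.safe_load`; how each sub-playbook then gets handed to a Dagger task is left out):

```python
# Split an already-parsed Ansible play into single-task sub-playbooks,
# so a caller like Dagger could run each one as its own cacheable layer.

def split_playbook(play):
    """Yield one single-task play per task, keeping the play-level keys."""
    base = {k: v for k, v in play.items() if k != "tasks"}
    for task in play.get("tasks", []):
        sub = dict(base)
        sub["tasks"] = [task]
        yield sub

# Stand-in for a YAML-parsed play; task names/modules are illustrative.
play = {
    "hosts": "localhost",
    "become": True,
    "tasks": [
        {"name": "install nginx", "package": {"name": "nginx"}},
        {"name": "enable service",
         "service": {"name": "nginx", "enabled": True}},
    ],
}

subs = list(split_playbook(play))
print(len(subs))                     # one sub-playbook per task
print(subs[0]["tasks"][0]["name"])
```

Handlers, `notify`, and cross-task variables would complicate a real implementation, since they assume the whole play runs in one process; that's probably where most of the work would be.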
I introduced Depot at my org a few months ago and I've been very happy with it. Conceptually it's simple: a container builder that starts warm with all your previously built layers right there, same as it would be running local builds. But a lot goes into making it actually run smoothly, and the performance-focused breakdown that shows where steps depend on each other and how much time each is taking is great.
It's clear a ton of care has gone into the product, and I also appreciated you personally jumping onto some of my support tickets when I was just getting things off the ground.
Thank you for the very kind words and for your support. Depot is full of incredible people who love helping others. So while you might see me on a ticket from time to time, it’s really an entire team that is behind everything we do.
Wasn't the 2DS just a 3DS minus the autostereoscopic (parallax barrier) screen, and especially minus the front-facing camera that did face tracking to improve the quality of the 3D?
My understanding was that market research showed a lot of users were turning off the 3D stuff anyway, so it seemed reasonable to offer a model at lower cost without the associated hardware.
> My understanding was that market research showed a lot of users were turning off the 3D stuff anyway
It was also because young children weren't supposed to use the 3D screen due to fears of it affecting vision development. You could always lock it out via parental controls on the original, but still that was cited as a reason for adding the 2DS to the lineup.
> Fils-Aime said. “And so with the Nintendo 3DS, we were clear to parents that, ‘hey, we recommend that your children be seven and older to utilize this device.’ So clearly that creates an opportunity for five-year-olds, six-year-olds, that first-time handheld gaming consumer."
Chat-Session-Ref: claude://gjhgdvbnjuteshjoiyew
Perhaps that could also link out to other kinds of meeting transcripts or something too.