Such a fun site back in the day, and novel. There's the Erdős number version for academia.
There was that other site that tried to guess what you were thinking. It'd ask you a series of questions to try and guess --- that one was eerily good at it, too. Feels like a similar sort of thing: how quickly you can converge on a solution.
Having said that, that article seems more interested in talking about things adjacent to the Oracle of Bacon and what its author finds interesting rather than the site itself. Not sure why?
Indeed. I've gradually adapted a server-rendered jQuery-and-HTML site to React by making React render a component here and there and gradually converting the site. Works great.
Yeah, I still consider Solid Edge very good. Easy to work with, doesn't require internet, no stupid limitations (like Fusion's 10-model limit). Many tutorials, etc. But still, they might revoke their free license at any moment, and then I'm out of a tool and my experience is wasted.
Sure it's $24/hour, but it'll crank through tens of thousands of tokens per second --- those beefy GPUs are built for massively parallel workloads. You'll never _use_ that many tokens on a single request. That's why the math works out once you have dozens or hundreds of people sharing it.
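The math is easy to sketch. All the numbers below except the $24/hour are assumptions I've made up for illustration, not vendor figures:

```python
# Back-of-the-envelope cost per token when GPU throughput is shared.
# Only HOURLY_COST comes from the comment above; the throughput numbers
# are illustrative assumptions.

HOURLY_COST = 24.00          # $/hour for the GPU instance
AGG_TOKENS_PER_SEC = 20_000  # assumed aggregate throughput across all requests
USER_TOKENS_PER_SEC = 60     # assumed rate a single request actually consumes

cost_per_second = HOURLY_COST / 3600
cost_per_million_tokens = cost_per_second / AGG_TOKENS_PER_SEC * 1_000_000
concurrent_users = AGG_TOKENS_PER_SEC // USER_TOKENS_PER_SEC

print(f"${cost_per_million_tokens:.2f} per million tokens")   # $0.33
print(f"~{concurrent_users} concurrent requests to saturate")  # ~333
```

A single user at 60 tokens/second would leave more than 99% of the box idle, which is why the per-token price only makes sense amortized over many concurrent users.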
No. The sauce is in KV caching: when to evict, when to keep, how to pre-empt an active agent loop vs. someone who's showing signs of inactivity at their PC, etc.
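A toy sketch of that kind of policy, with names and structure invented for illustration: evict the longest-idle human session first, and keep sessions that look like active agent loops pinned.

```python
# Hypothetical KV-cache eviction policy: agent loops fire requests
# back-to-back, so their caches stay hot; idle interactive sessions
# are evicted first when memory runs low. All names are made up.

from dataclasses import dataclass

@dataclass
class Session:
    session_id: str
    kv_bytes: int          # memory held by this session's KV cache
    last_request: float    # timestamp of the session's last request
    is_agent_loop: bool = False

class KVCacheManager:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.sessions: dict[str, Session] = {}

    def used(self) -> int:
        return sum(s.kv_bytes for s in self.sessions.values())

    def admit(self, s: Session) -> None:
        # Evict the idlest non-agent sessions until the new one fits.
        while self.used() + s.kv_bytes > self.capacity:
            idle = [v for v in self.sessions.values() if not v.is_agent_loop]
            if not idle:
                raise MemoryError("only active agent loops remain; cannot evict")
            victim = min(idle, key=lambda v: v.last_request)
            del self.sessions[victim.session_id]
        self.sessions[s.session_id] = s
```

Real schedulers also have to weigh prefix sharing and recomputation cost, which this ignores entirely.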
I mean, is it possible the latter models used search? Not saying Stepfun is perfect (it isn't). Gemini especially, and unsurprisingly, uses search a lot, and it's ridiculously fast, too.
Incidentally, telling an AI you want to talk Socratically and never to reveal the answer outright unless asked is a fantastic way to learn.
You can dial in the difficulty: "you must be pedantic and ask that I correct misuse of terminology" vs. "autocorrect my terminology mistakes in brackets".
Super duper useful way to learn things. I wish I had AI as a kid.
What Codex often does for this is write a small Python script and execute it --- to bulk rename files, for example.
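The sort of throwaway script that tends to come out of this is a few lines long. A sketch, with the directory layout and extensions invented as an example:

```python
# Bulk-rename every *.jpeg file in a directory to *.jpg.
# A typical disposable agent-written script: no CLI, no error
# handling beyond what the task needs.

import os

def bulk_rename(directory: str, old_ext: str = ".jpeg",
                new_ext: str = ".jpg") -> int:
    renamed = 0
    for name in os.listdir(directory):
        if name.endswith(old_ext):
            base = name[: -len(old_ext)]
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, base + new_ext))
            renamed += 1
    return renamed
```

The appeal is that the model gets the exactness of `os.rename` semantics instead of improvising shell quoting for every filename.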
I agree that there's a use for fast, "simpler" models; there are many tasks where the regular codex-5.3 isn't necessary. But I think it's rarely worth the extra friction of switching from regular 5.3 to 5.3-spark.
The winning strategy for all CI environments is a build-system facsimile that works on your machine, your CI's machine, and your test/UAT/production, with as few changes between them as your project's requirements allow.
I start with a Makefile. The Makefile drives everything. Docker (compose), CI build steps, linting, and more. Sometimes a project outgrows it; other times it does not.
But it starts with one unitary tool for triggering work.
This line of thinking inspired me to write mkincl [0] which makes Makefiles composable and reusable across projects. We're a couple of years into adoption at work and it's proven to be both intuitive and flexible.
Because, in 2026, most build tools still aren't really all that good when it comes to integrating all the steps needed to build applications with non-trivial build requirements.
And, many of them lack some of the basic features that 'make' has had for half a century.
Yes, kick off into some higher-level language instead of being at the mercy of your CI provider's plugins.
I use Fastlane extensively on mobile, as it reduces boilerplate and gives enough structure that the inherent risk of depending on a third party is worth it. If all else fails, it's just Ruby, so you can break out of it.
Make is incredibly cursed. My favorite example is that it has built-in rules (oversimplified: some extra Makefile code that is treated as if it existed in every Makefile) that will extract files from a version control system.
https://www.gnu.org/software/make/manual/html_node/Catalogue...
What you're saying is essentially "Just Write Bash Scripts", but with an extra layer of insanity on top. I hate it when I encounter a project like this.
You still get bash scripts in the targets, with $-escaping hell and weirdness around multiline scripts, headaches around ordering and parallelism control, and no support for background services.
The only sane use for Makefiles is running a few simple commands in independent targets, but do you really need make then?
(The argument that "everyone has it installed" is moot to me. I don't.)
I agree, but this is kind of an unachievable dream in medium to big projects.
I had this fight for some years at my present job. Early on, I kept nagging about the path we were heading down by not letting developers run the full pipeline (or most of it) on their local machines. The project decided otherwise, and now we spend a lot of time and resources on a behemoth of a CI infrastructure, because each MR takes about ten pipeline builds (of trial and error) to be properly tested.
It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call. Some things just don't run on a local machine: fair. But a lot of things do, even very large things. Things can be scaled down, and the same harnesses can be used for your development environment, your CI environment, and your prod environment. You don't need a full prod db, you need a facsimile mirroring the real thing but 1/50th the size.
Yes, there will always be special exceptions: they suck, and we suffer as developers because we cannot replicate a prod-like environment in our local dev environment.
But I laugh when I join teams and they say that "our CI servers" can run it but our shitty laptops cannot, and I wonder: why can't they just... spend more money on dev machines? Or spend some engineering effort so it works on both?
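The 1/50th-size facsimile idea can be sketched minimally. This assumes a SQLite source purely for illustration; a real setup would have to respect foreign keys and anonymise data:

```python
# Copy one table's schema plus a deterministic 1/fraction sample of its
# rows into a small local database. Table and column names below are
# invented; this is a sketch of the idea, not a production tool.

import sqlite3

def make_facsimile(src: sqlite3.Connection, dst: sqlite3.Connection,
                   table: str, fraction: int = 50) -> int:
    # Recreate the table's schema in the destination.
    (ddl,) = src.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?", (table,)
    ).fetchone()
    dst.execute(ddl)
    # Keep roughly 1 in `fraction` rows, deterministically by rowid,
    # so every developer gets the same facsimile.
    rows = src.execute(
        f"SELECT * FROM {table} WHERE rowid % ? = 0", (fraction,)
    ).fetchall()
    if rows:
        placeholders = ",".join("?" * len(rows[0]))
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()
    return len(rows)
```

Deterministic sampling matters more than the exact fraction: a reproducible small dataset is what lets local runs and CI runs fail the same way.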
> You don't need a full prod db, you need a facsimile mirroring the real thing but 1/50th the size.
My experience has been that the problems in CI systems come from exactly these differences: "works on my machine", followed by "oops, I guess the build machine doesn't have access to that random DB", or "docker push fails in our CI environment because of credentials/permissions, but it works when I run it on my machine".
> It's not an unachievable dream. It's a trade-off made by people who may or may not have made the right call.
In my experience at work, anything that demands too much thought, collaboration between teams, and enforcement of hard development rules is always an unachievable dream in a medium-to-big project.
Note that I don't think it's technically unachievable (at all). I've just accepted that it's culturally (as in work culture) unachievable.
But it isn't a question of security. The project would very much like the developers to be able to run the pipelines on their machines.
It's just that management doesn't see it as worth it, in terms of development cost and the limitations it would introduce into the current workflow, to enable the developers to do that.
Portability across Lisp dialects is usually not a thing. Even Emacs Lisp and Common Lisp which are arguably pretty close rarely if ever share code.
You could make a frontend for dialect A to run code from dialect B. Those things have been toyed with but never really took off. E.g., the cl package in Emacs cannot accept real Common Lisp code.
I'm not arguing against the idea, I'm just curious how it would work because I see no realistic way to do it.
Lisp dialects have diverged quite a bit, and it would be a lot of work to bridge the differences to a degree approaching 100%. 90% is easy, but that only works for small, trivial programs.
I say this having written a "95%" Common Lisp for Emacs (still a toy) and having successfully run an old Maclisp compiler and assembler in Common Lisp.
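To make the 90%-vs-100% gap concrete, here is a toy sketch (in Python, with an invented symbol table) of the surface-level renaming that covers the easy cases:

```python
# Toy cross-dialect translation at the "easy 90%" level: pure symbol
# renaming from Emacs Lisp toward Common Lisp. Everything hard is
# ignored: scoping rules, reader syntax, macros, packages. Even the
# mappings below are only approximate.

import re

ELISP_TO_CL = {
    "setq": "setf",
    "defvar": "defparameter",  # close, but the semantics already differ
}

def translate(src: str) -> str:
    # Split into parens and atoms; strings with spaces would already
    # break this, which is rather the point.
    tokens = re.findall(r'\(|\)|[^\s()]+', src)
    out = " ".join(ELISP_TO_CL.get(t, t) for t in tokens)
    return out.replace("( ", "(").replace(" )", ")")
```

The renaming dictionary can grow forever and still never reach 100%, because the residue is semantic (dynamic vs. lexical scope, package systems, reader macros), not lexical.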