Interesting, I didn't know minutes were free before.
I stopped my recurring subscription at the end of last year when it started spinning up actions for review, which as a side effect roughly doubled the time a review took. Before that I would open a PR, wait at most a minute or two, and the review was already done.
I haven't thought about any secondary play, but if these companies converge on Google's TPUs, they would probably carve an eager slice out of NVIDIA's current market.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
I keep getting notifications from my tooling that Gemini models are overloaded, "so we switched you to OpenAI." So I feel Google is not ready to sell TPUs just yet.
If you're building agentic processes (harnesses) for business processes, local models are a great way to do that while keeping your data, including any personal data, private.
If you're vibe coding, a Codex/Claude subscription makes more sense, as it's a more polished experience.
I don't vibe code, but I use self-hosted models with Codex for code review and snippet generation.
This is the LLM integration approach I was pitching to some companies last year, though in my case it was strictly tied to self-hosted inference.
Agents at the edge of the business, where they can work independently and asynchronously, are an approach that I don't feel has been explored enough in business environments.
Sending your entire communication and documents to OpenAI would be a very bold choice.
Not only are businesses already doing that - they're not even cleaning up their source material so LLMs are generating garbage outputs from the old inconsistent trash that haunts Confluence, Google Drive, and all of the other dumping grounds for enterprise ephemera. Oftentimes "AI transformation" is just a slightly better search engine that regurgitates your old strategy (that didn't work the first time) and wraps it up in new sycophantic language that C-levels use to bulldoze the budgets and timelines of actual skilled front line employees.
I do believe that LLMs and AI provide actual value, but the "workspace" is usually the passive-aggressive CYA battleground for employees to appear productive in spite of leadership's blind spots, ossified business practices, and "aligned" decision-making that doesn't actually fix a broken org. Maybe this release will be the one that finally challenges nepo-hires, not-invented-here, and all of the other corpo crap that defines "enterprise" business.
Cleaning up source material is not easy work in companies that have massive piles of it and don't exactly know which parts of it are wrong. Quite often these documents are poorly versioned and did work for something, just not exactly what you're looking for.
With this said, you can use your incorrect AI answers to find and then purge or repair this old and/or poorly written documentation and improve the output.
I agree - and I've noticed that these AI transformations tend to lay bare the many issues, inconsistencies, and other problems with workspace functions and data. Unfortunately the people who are usually in charge of these projects don't have the seniority or sway to actually change the broken processes, or aren't on the right team to remove cruft. Usually you have to wait until a salesperson misquotes something from an AI summary before these issues get unblocked, because then they actually affect revenue.
Free Monads are a very nice (though not performant) way of creating an embedded domain specific language interpreter.
Once I was building a declarative components library in PHP, using ideas I'd learned from free monads. I'm sure you can't imagine what an atrocity I built. It did the job, but I had to mentally check out and throw a couple of gotos into my main evaluation loop.
All that to say: the elegance of expressivity is tied to the syntax and semantics of the language.
Free Monads are also built on a tower of mathematical structures that come with laws and invariants. I have yet to see such formalization for transducers.
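To make the DSL-interpreter idea concrete, here's a minimal Python sketch of the structure a free monad gives you: programs are plain data (an instruction tree with continuations), and an interpreter is a separate function that walks that tree. All names (`Get`, `Put`, `Done`, `run`) are illustrative, not from any library, and Python has no real monadic bind, so this only captures the shape of the technique.

```python
from dataclasses import dataclass
from typing import Any, Callable

# The instruction set of a tiny key-value DSL. Each instruction
# carries a continuation describing the rest of the program.
@dataclass
class Get:
    key: str
    next: Callable[[Any], Any]  # receives the looked-up value

@dataclass
class Put:
    key: str
    value: Any
    next: Any  # rest of the program

@dataclass
class Done:
    result: Any  # pure result, nothing left to interpret

def run(store: dict, prog) -> tuple:
    """One interpreter: execute the program against an in-memory dict.
    A different interpreter could log, talk to Redis, etc., without
    changing the program itself."""
    while True:
        if isinstance(prog, Done):
            return prog.result, store
        if isinstance(prog, Get):
            prog = prog.next(store.get(prog.key))
        elif isinstance(prog, Put):
            store = {**store, prog.key: prog.value}
            prog = prog.next

# A program written as pure data, independent of any interpreter:
# put "x" = 1, read it back, then store and return n + 10.
prog = Put("x", 1, Get("x", lambda n: Put("x", n + 10, Done(n + 10))))

result, store = run({}, prog)
print(result, store)  # 11 {'x': 11}
```

The payoff is the same one free monads give in Haskell: the program/interpreter split. The performance caveat also carries over, since every step allocates an instruction node instead of just executing.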
Featuritis. Just keep pumping out features with no thought. Today, probably AI-coded too.
Even in mid-sized projects, if you keep pushing only new features you'll end up with a similar system. At least that's my experience on the three or so mid-sized projects I've worked on where nothing mattered except checking off features from a huge backlog.
Ah, been at a company like that once before. After a while a dedicated team was created to go in and fix broader issues and essentially stop the system from collapsing under its own weight.
Anyway, token economics won't make sense to users, nor will it be clear whether any of it is worth it, until the tokens aren't all subsidized.