You're wasting a ton of tokens doing that, though. Right now you don't realize it because they're being heavily subsidized, but you will understand the point of having good orchestration and memory files once you have to pay the real cost of your usage.
Cost cannot go up over time, only down (with occasional short-term fluctuations). Competition, including open-weight models and consumer hardware (e.g. the upcoming M5 Ultra), keeps pushing down the ceiling of what you can charge.
I think their current goal is to capture as much market as they can while they still have the best models, their only moat. Look at Anthropic, they are clearly trying to lock their users in their ecosystem by refusing to follow conventions (AGENT.md etc) and restricting their tools exclusively to their own services.
While some of the ideas in this do resonate with me (or at least they're entertaining), it's unfortunate that it's so obviously LLM generated. And some parts of it, like the INTJ exceptionalism, reek of LLM sycophancy, which then turned into some kind of god complex...
I just actually read that, and it is possibly the most morally abominable screed I've come across in a long time. Shocking that it's acceptable to share in polite company.
That's really bad... I don't care if people (probably an LLM here) make these kinds of mistakes in their own personal tooling. But when you're going to distribute it as some sort of library, it becomes unacceptable.
Write public libraries to solve problems in domains you are an expert in. If your library is LLM generated, it is most likely useless and full of errors that will waste other people's time and resources.
Is burning the coal, delivering the electricity, and storing it in a battery that's then converted to mechanical motion more efficient than an ICE? What are the losses in delivery and storage?
There are losses, yes, but the chain is still more efficient than an ICE. Not going to enumerate them here because that was a discussion to be had in 2010 and I am bloody tired of it.
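For anyone who does want the enumeration: the comparison is just a product of stage efficiencies. A back-of-the-envelope sketch, where every figure below is an illustrative ballpark I've plugged in (not measured data; real values vary by plant, grid, and vehicle):

```python
from math import prod

# Coal -> EV chain: plant thermal efficiency, grid transmission,
# charger + battery round trip, electric motor/drivetrain.
# All numbers are assumed ballparks for illustration.
ev_chain = {
    "coal plant": 0.38,
    "transmission": 0.94,
    "charge/discharge": 0.85,
    "motor/drivetrain": 0.90,
}

# Gasoline ICE: engine thermal efficiency times drivetrain losses.
ice_chain = {
    "engine": 0.28,
    "drivetrain": 0.85,
}

ev_total = prod(ev_chain.values())    # ~0.27
ice_total = prod(ice_chain.values())  # ~0.24

print(f"EV (coal) well-to-wheel:  {ev_total:.1%}")
print(f"ICE well-to-wheel:        {ice_total:.1%}")
```

Even on a 100% coal grid the EV chain roughly edges out the ICE under these assumptions, and any cleaner grid mix widens the gap.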
I think curing GAD will mean changing your personality. There's always going to be a before/after you, that's the whole point. The important part is being able to reliably know what the "after you" will be so you can be sure that you want that change to happen.
Curing anything changes your personality. I stopped biting my nails to the quick after 50 years - that's a difference!
The Ship of Theseus argument should never be used to justify retaining mental dysfunction. "What if I can't paint sunflowers if I stop being suicidal?" is a question; more decades of Van Gogh paintings would inarguably have been better.
I took "99% reliable" as meaning not having to repeat the command; given that Siri is something like 50% reliable by that metric, 99% sounds like heaven.
In those cases yeah, 99% isn't reliable enough. I'm not going to tolerate having power down for 3 days out of the year. But in fairness, home automation is less critical than that so 99% reliability is still acceptable to me. I don't think LLMs are anywhere near that, though, nor is there any sign of them getting there any time soon. So it does concern me to use an LLM as the backbone of home automation.
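The "3 days out of the year" figure is easy to check: 99% uptime allows about 3.65 days of downtime annually. A quick sketch of what each reliability level permits:

```python
# Downtime per year implied by a given uptime fraction (simple arithmetic).
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

for uptime in (0.99, 0.999, 0.9999):
    downtime_h = (1 - uptime) * HOURS_PER_YEAR
    print(f"{uptime:.2%} uptime -> {downtime_h:.1f} h/year "
          f"({downtime_h / 24:.2f} days)")
```

So 99% is roughly 3.7 days/year of failures, tolerable for home automation, not for power.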
It is an adjustment, coming from deterministic software, to add a non-deterministic component to it; one whose behavior can be improved by the quality of the language and input you give it.