They linked to the manual page; if you go to the root of the site, there's a link to a different domain billed as the "Official authorised online store for all Cafelat items."
Even then, we're stuck with the root problem of LLM-based agents (i.e. the ones everyone is trying to use these days) being fundamentally untrustworthy and prone to going rogue.
That reminds me of a sci-fi quote, where one of the main characters is discussing a murderous antagonist, putting their evil into a broader context:
> "He was just a little villain. An old-fashioned craftsman, making crimes one-off. The really unforgivable acts are committed by calm men in beautiful green silk rooms, who deal death wholesale, by the shipload, without lust, or anger, or desire, or any redeeming emotion to excuse them but cold fear of some pretended future. But the crimes they hope to prevent in that future are imaginary. The ones they commit in the present--they are real."
To me, the distinction between what we wanted and what we got is something like:
1. Wanted: a kind of capital that is widely available, so that people can exercise control and agency, with machines that do what they want for their own needs.
2. Got: a distribution tool controlled by mega-corporations, which decide what you should be able to see or have.
Related concept: unaccountability machines [0], where the system (electronic or organizational) mainly exists to make things nobody's fault.
There's a Discworld bit [1] that often comes to mind for me, where the protagonist is reading a press release from a corporate communications monopoly:
> The Grand Trunk’s problems were clearly the result of some mysterious spasm in the universe and had nothing to do with greed, arrogance, and willful stupidity. Oh, the Grand Trunk management had made mistakes—oops, “well-intentioned judgments which, with the benefit of hindsight, might regrettably have been, in some respects, in error”—but these had mostly occurred, it appeared, while correcting “fundamental systemic errors” committed by the previous management. No one was sorry for anything, because no living creature had done anything wrong; bad things had happened by spontaneous generation in some weird, chilly, geometric otherworld, and “were to be regretted.”
It's kind of a "code that gets the immediate result you want" versus "code that puts the developer in the right headspace for maintaining it" thing.
Ultimately you're not conversing with any real LLM; it's iterative document generation, where humans perceive fictional characters in the output. If the text you contribute says "You're just an {Noun}", that's shaping the document output based on what got trained in relation to {Noun}.

Which may eventually backfire, when the (real) LLM gets trained on documents such as blog posts like "The moment you realize the {Noun} is retarded."
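To make the "iterative document generation" point concrete, here's a minimal sketch in Python. The complete() function is a hypothetical stand-in for a raw text-completion model; the point is only that a "chat" is one growing document that the model keeps extending:

    def complete(document: str) -> str:
        # Hypothetical stand-in for a raw completion model: a real one
        # would return a statistically likely continuation of `document`.
        return " Sure, here is my reply as the Assistant character..."

    def chat_turn(document: str, user_text: str) -> str:
        # The "conversation" is just one text document being extended.
        document += f"User: {user_text}\nAssistant:"
        reply = complete(document)  # the model continues the document
        return document + reply + "\n"

    # The "Assistant" is a character the document describes, not the
    # model itself. Text like "You're just an {Noun}" becomes part of
    # the document and steers which continuations are likely next.
    doc = "A transcript of a chat between User and Assistant.\n"
    doc = chat_turn(doc, "You're just an {Noun}.")
    print(doc)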
It's often buried because the people making money dislike it, so much so that they will lobby the government to impose broad bans, especially if:
* The ban makes somebody else pay most of the costs of protecting "the children" against their design choices or business model.
* The ban gives them a blanket pass for almost any exploitative design against adults or other acceptable targets.