I built a little Jellyfin plugin for KOReader [1] so I can access my books from my Kindle. Jellyfin proved really nice to work with (though there was some poorly documented auth stuff they were in the process of deprecating).
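For anyone hitting the same wall: the part that tripped me up was the identity header. Roughly, you authenticate once and then reuse the returned token. Here's a minimal Python sketch of my understanding of the flow, not the plugin's actual code; the server URL, credentials, and device names are all placeholders:

    import requests

    SERVER = "https://jellyfin.example.com"  # placeholder server URL

    # Legacy clients sent this as "X-Emby-Authorization"; newer Jellyfin
    # versions accept the same "MediaBrowser" scheme in the standard
    # Authorization header, which (as far as I can tell) is what the
    # deprecation is about.
    identity = (
        'MediaBrowser Client="KOReader", Device="Kindle", '
        'DeviceId="kindle-0001", Version="0.1"'
    )

    resp = requests.post(
        f"{SERVER}/Users/AuthenticateByName",
        json={"Username": "reader", "Pw": "hunter2"},  # placeholder creds
        headers={"Authorization": identity},
    )
    resp.raise_for_status()
    token = resp.json()["AccessToken"]

    # Later requests attach the token to the same scheme.
    books = requests.get(
        f"{SERVER}/Items",
        params={"IncludeItemTypes": "Book", "Recursive": "true"},
        headers={"Authorization": f'{identity}, Token="{token}"'},
    )
    books.raise_for_status()
    print(books.json()["TotalRecordCount"])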
If anyone has been thinking of building something in the Jellyfin ecosystem, I very much recommend it.

[1]: https://github.com/DeclanChidlow/KOReader-Jellyfin-Plugin/
> I've also recently heard Elon Musk being described as a "literal Nazi" a lot
The chatbot run by his company has called itself 'MechaHitler' multiple times, and Musk himself did what is widely regarded as a Nazi salute during Trump's second inauguration.
I guess to expand on my earlier post: it's a good thing that these kinds of absurd allegations are counterproductive and generally unconvincing, because if they were more readily believable they would be quite irresponsible. Real violence has occurred because of beliefs (and posts) like this.
I've had great fun using Phyphox to visualise my hand getting closer to/farther from my phone based on the presence of my magnet implant. So many cool little things the app can visualise and measure, especially when used in creative ways.
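The trick, as I understand it, is just that the implant behaves like a small magnetic dipole, so the field strength the phone's magnetometer sees falls off steeply (roughly with the cube of distance), which is why small hand movements register so clearly. If you export the data as CSV, recreating the visualisation is only a few lines. A rough Python sketch, not Phyphox's actual code; the column layout is an assumption and may differ by app version:

    import csv
    import math

    # Assumes a Phyphox magnetometer export whose first four columns are
    # time, then the x/y/z field components in µT (placeholder layout).
    with open("magnetometer.csv", newline="") as f:
        for row in csv.DictReader(f):
            t, x, y, z = (float(v) for v in list(row.values())[:4])
            magnitude = math.sqrt(x * x + y * y + z * z)  # total field, µT
            print(f"t={t:.2f}s  |B|={magnitude:.1f} µT")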
I'm still completely in love with WebOrigami (https://weborigami.org). It is a 'dialect of JavaScript' designed for building static sites. It isn't super popular, but it's much more flexible and comprehensive than anything else I've found. Fills the 11ty gap nicely.
I feel it's my due diligence to advise you that LessWrong is closely associated with the rationalists and all that comes with them. It is worth doing a tad of research into the rationalist community and Eliezer Yudkowsky before going too deep down the LessWrong rabbit hole. https://en.wikipedia.org/wiki/Rationalist_community
I found it pretty boring, to be honest. Felt like an excuse for the author to show off how smart they thought they were, without really any skill at characterization or meaningful plot.
And like, I'm not a writing snob. I read fanfic by amateur authors. But HPMOR just doesn't do much of anything interesting.
There's lots of valid critiques of HPMOR (I recently reread it and the early chapters are painfully obnoxious), but I think "no meaningful plot" and "doesn't do anything interesting" are objectively false. It has like a dozen interacting plotlines, and is massively different from any other HP fanfic and most media in general. It is popular for a reason. If you dropped it early, I'd encourage you to try again.
>Writing in Asterisk, a magazine related to effective altruism, Ozy Brennan criticized Gebru's and Torres's grouping of different philosophies as if they were a "monolithic" movement. Brennan argues Torres has misunderstood these different philosophies, and has taken philosophical thought experiments out of context.[21] Similarly, Oliver Habryka of LessWrong has criticized the concept, saying: "I've never in my life met a cosmist; apparently I'm great friends with them. Apparently, I'm like in cahoots [with them]."
The people who coined TESCREAL don't seem to really be related to the rationalists, and seem to have coined a term for "those vaguely related ideas from people doing stuff we consider wrong". "Evil people from San Francisco" could work just as well, I think.
And wait, shouldn't I beware arguments for utilitarianism rather than against? If that's what you meant, then yeah, I agree; especially when pushed to the extreme, it leads you to some very weird places.
I'm not sure what the parent meant by "beware arguments against utilitarianism" - there is nothing wrong with arguing for or against utilitarianism. It's a popular moral philosophy.
You should beware of bad utilitarian arguments though, which is where you often get the real "gotta break a few eggs to make an omelette" kind of arguments that justify all manner of atrocity in service of some narrow hypothetical future good.
Like when Marc Andreessen says we should consider anyone who would slow down or regulate AI advancement a murderer of future humans. Bad utilitarianism right there.
Proper utilitarians are concerned with the net difference between all positive and negative consequences of actions.
People often think they have mic-dropped utilitarianism by saying things like, "Oh, so if two people get a lot of joy by beating up a third person, that is ethical because it is overall net positive?"
A few things wrong with that. First, there is no single net-happiness formula that utilitarians are proposing. Peter Singer has said more than once that he weights suffering far, far higher than happiness.
Second, every ethical system has screwy edge cases that make the system look messed up. "Do unto others..." is terrible if you are talking about masochists.
I know nothing about the drama, but treating "utilitarianism" as if it were one thing, or as if a particular person or group's position were identical to utilitarianism, seems ironic in this context. It is like claiming all pizza is bad because I went to Domino's and didn't like the experience.
I would change your "but" to "and"? But maybe my comment wasn't very clear. The dangers of phone typing.
I meant that one should acquaint oneself with the criticisms of utilitarianism, if one wants to understand what it is people react to in rationalism and related communities.
Like any group of humans, there are power structures and edge cases that can lead to horrific outcomes. Giving the person that posted the warning the benefit of the doubt, I think what they are saying is that "Rationalist does not necessarily mean positive for humanity, nor even no harm for humanity". This holds for all religions and religion-like movements, of which Rationalism, in this sense, is one.
I think this holds for basically all movements, in which case I don't really understand the need to flag that website over any other website.
Edit: though I don't want to downplay that this specific group is a cult, with classic cult techniques like sleep deprivation and the disastrous consequences often associated with cults.
> They claim to practice unihemispheric sleep (UHS), a form of sleep deprivation intended to "jailbreak" the mind, which they believe can enhance their commitment to their cause.
> I think this holds for basically all movements, in which case I don't really understand the need to flag that website over any other website.
Because it uses the language and thinking patterns of the Rationalists, it serves as a strong indoctrination tool. The site itself isn't bad but, as someone who flirted with those communities as a result of the site, I think the warning is deserved.
If you feel like the warning is deserved, and have personal experience, then yeah, fair enough. I personally live in the "periphery" (read: not in Berkeley or SF or New York), so I think I may see way less of the good and the bad.
It's a particular variety of "everyone else is wrong (and maybe a bit stupid)".
Like, sure, sometimes you get popular nonsense like recovered memories or accidental fires can't be as hot as intentional fires or shaken-baby syndrome or bite-mark analysis. But a lot of times, everyone isn't wrong and you've just overlooked something critical or misdefined the problem.
> Like, sure, sometimes you get popular nonsense like recovered memories or accidental fires can't be as hot as intentional fires or shaken-baby syndrome or bite-mark analysis. But a lot of times, everyone isn't wrong and you've just overlooked something critical or misdefined the problem.
The older I get, the more I find that everyone is wrong. It's fucking astounding how much stuff either was never actually checked, or is true only under very select circumstances, with those caveats being widely ignored.

For example, at work right now we have been using a test for 40 years that was developed around the idea that our product absorbs air: supposedly chemical variation would lead to extreme differences in results, and you can't retest an item for at least 24 hours because it will still be affected. Turns out none of that was true. All the error we were getting was from temperature change, and the items can be retested after 45 seconds. For 40 years no one took the 30 minutes to verify a claim that costs us millions of dollars per year. And that's just the example from this past week. I've probably seen several hundred such cases of completely unjustified claims being treated as gospel truth.
I can't speak for the countless things I've never tested, but if nearly everything I do test is wrong, across numerous fields full of very intelligent people, it doesn't give me much confidence about everything else. We live in a world that values simplicity and confidence, not nuance and rigorous verification. I've gotten to the point where I don't trust anything without verification, not even my own past work.
> It's a particular variety of "everyone else is wrong (and maybe a bit stupid)".
Gestures at the current state of the world
Not that adopting rationalist modes of thinking will fix the problem, of course. Teach rationalist principles to an idiot and you will have a slightly more rational idiot, who will reason himself into absurdity. Teach them to a manipulative, amoral psychopath and you will have a more skillful manipulator.
Rationalist principles and methods provide superior tools for thinking through some complex problems, but they say nothing about foundational ethics (other than pointing out possible sources for the many different systems of ethical beliefs). And they cannot be wielded effectively by people who lack the ability to decouple, to think abstractly, or to create extended “chains” of thoughts and keep them in working memory.
One should be suspicious of anyone who claims that rationalism is a panacea, or alternatively that it is somehow a problem per se. It's a neutral set of tools, a community that wants to improve those tools, and a small group hanging off the edge who have unrealistic and/or harmful views of how those tools should be applied. Unfortunately this third group is presented by anti-rationalists as the core of rationalism. In reality, they are easily avoided unless you hold the same core values.
(I say this as a long-time observer who appreciates their work but doesn't consider himself part of the "rationalist" community.)
One can take many good ideas from LessWrong, Yudkowsky, et al. without adopting a rationalist identity. Sure, it attracted wackos and they made a cult or two. But that could happen to anyone!
It is entirely plausible they were just experimenting with AI tooling to better understand how to use it and what it is capable of. Their saying "despite not knowing this particular programming language or framework" indicates to me that this is probably the case.
Nope. I've been working on this project for a couple of days now and things were mostly going well. A significant portion of the MVP backend and frontend was built and working. Then this one seemingly simple bug appeared and just totally stumped both Codex and Claude Code.
There was even another UI component (in the same file) that was almost the same but slightly different, and that one was correct. That's what I copy-pasted and tweaked when I fixed the problem. But for some reason the models were utterly incapable of making that connection.
With Codex and Claude Code, I thought maybe, because these agentic coding tools are trained to be conservative with tokens and to aggressively use grep, they weren't looking at the full file in one go.
But with Gemini I used the web version and literally pasted that entire file + screenshots detailing what was wrong (including the other component which was rendering correctly) and it still couldn't solve it. It was bewildering.
I had the exact same issue: a UI scrollbar bug that Claude couldn't fix. It tried 4-5 different ideas it was sure were causing the issue; none of them worked.

Tried the same with Codex; it did a little better, but it still took 4 rounds.

This is with Playwright automation, screenshots, full access, etc.