But one of the reasons they switched was that the compiler upstream for the original language they used, Zig, wouldn't accept the slop contributions they wanted to make for Bun performance. What will they do when they need to push a slop contribution upstream to Rust?
At this point they will probably just fork yet again and maintain some vibe compiler.
No, they've explicitly denied it.[0] However, they do regularly dig at how much faster their fork is[1][2], a fork they can't merge upstream because of Zig's AI policy.
Huh. I wonder if the original intent was to merge an AI-generated PR to a high-profile project like Zig. It makes the headlines and generates hype. But that went embarrassingly badly for them, so they had "port Bun to Rust" as a backup.
They should make FullstackLang. It compiles English in .md to machine code that can directly run on the specialized hardware it designs for it, which you have to 3D print at runtime. Every program gets its own custom hardware. Composability and reuse be damned. Pay the token masters for every thought you have.
Funnily enough, iirc Foucault viewed a large part of his work as extending the style of investigation Nietzsche initiated in Genealogy of Morals.
I think Nietzsche is great. His prose is a breath of fresh air and he's arguably the greatest literary stylist among philosophers since the Greeks. Sartre was pretty good too, likely thanks to his ability as a novelist. Some later continental philosophers would have really benefited from reading his aphorism that good writers write to be understood.
I have a lot of issues with late 20th-century continental (particularly French) philosophers, but of all of them Foucault is the last one anyone should have an issue with. While he's guilty of some of the pompous and needlessly unintelligible stylistics this crew adopted, he at least has some pretty substantial ideas behind his work. Derrida and Lacan on the other hand....
As far as sociology goes, I think you probably realize claiming an entire field is bunk is dumb. In fact you are committing the very wrong you are apparently complaining about (writing off the field of developmental psychology). I haven't heard of a single beef between these two fields btw, must have been an odd textbook.
What's insubstantial about Lacan? There seems to be enough in there to give D&G, Zizek etc careers (as well as inspire a bunch of compelling media, e.g. The Matrix).
All great things, but helping other philosophers have careers and inspiring cool art does not necessarily imply that you yourself had substantial philosophical concepts.
I'm aware: I was inviting your thoughts (this is a cramped space for a debate). For mine, reading Lacan directly is a chore (post-atom bomb academic arts can have a "use all the cool sciencey words" problem), but I find the Other/other and reworking of Freud's ego id etc thought-provoking.
Biopower is the most famous one, but I actually think his greatest contribution was to make philosophers pay more attention to the ways in which epistemic systems and ways of organizing knowledge are connected to political power.
I actually think his PhD thesis "The History of Madness" is his best work. It encapsulates much of the subject matter that would occupy him (knowledge and power) in a domain that's easier to understand than some of his later arguments, and it predates his adoption of a more contorted literary style (or maybe the translation is just better, idk).
Ian Hacking also has a great text, "Historical Ontology", that extends Foucault's work and picks up many of the chief ideas in a far more lucid manner for those of us who aren't fans of the later continental style (which, if I'm being honest, was always a little too concerned with being obtuse just to sound intelligent).
Yeah, believe me, I'm not a fan of the misapplication and misunderstanding of much of this work. It's a bitter lesson in why making one's ideas clear in straightforward prose is so essential. I think Foucault, at least, could be absolved of the notion that he intended any kind of said misapplication. Some other philosophers, however, I think were just straight-up hacks who exploited the vogue of confusing language and weak metalingual philosophizing (Derrida, coughcough). If only we got students to read Wittgenstein first and saved them from all the sophist language games.
Just to volunteer an example: An editorial by Assange that explicitly called out the panopticon was on my mind today.
The rise of "meta glasses" and reading that ICE also wishes to employ them was what reminded me of this, I believe.
Sociology (like philosophy, like math) is one of those subjects where a good teacher makes all the difference. I guess that's true of any subject. Many people come away from a subject thinking it's bunk or not relevant to them for all sorts of reasons. Teachers are not meant to be babysitters or proctors; they are supposed to offer context and connect the dots.
The number of bad teachers (it is a hard job) is quite staggering. Education is in large part a mess because we've tried to scale a system that was designed for the very few to the very many without the proper investment.
The term/design is not his, but he offered an interpretation, or extrapolation, inspired by the concept that was distinctive enough that if you mention the panopticon in the context of Foucault it means something specific.
Sure, but you need to consider that, in this case, we are talking about the language runtime. It isn't just some other library dep. It's basically the base layer of the stack. It has a huge blast radius. It is, imo, a nontrivial decision to swap runtimes. If problems emerge you can't easily plug in some other runtime; that's a major technical decision and should be treated as such.
In the past at least you could assume the maintainers of the runtime had some kind of mental model of how it worked. In my view, with the way this rewrite has been approached, you can't assume that at all. It's good the test suite passes, but who knows how this will affect the evolution of the codebase? Do we even know if the code is good? How much is just slop? Tests do not test architecture. Is this new rewrite even going to be maintainable? How is the team going to get up to speed on a new codebase in a new language that the main author presumably doesn't even fully understand?
There are many reasons to be concerned. Treating this as no big deal would make me question one's ability to make assessments of technology. There's a world of difference between relying on gen AI heavily in products and leaf nodes of the stack, using it in a purely assistive way, and using it to drive a massive-scale rewrite of a base component in a language the maintaining team has an unproven amount of experience with. From a reliability standpoint, the way this project was executed is completely preposterous, and it's very clearly a marketing stunt more than a sound technical decision on how to drive a project. It's not about the use of LLMs, it's about the stupid and blatantly obvious generation of cognitive debt, all to help sell Claude. I'd have way fewer qualms if they used LLMs to do a rewrite in a way that retained developer understanding (i.e. not driven by one person and in such a short timespan that having a robust mental model, even for that person, is highly unlikely).
You're implying that reckless rewrites within the JS ecosystem are a novel event, or more specifically that surprise language changes over a short period of time are. And yet... I can think of at least six times in which exactly this has happened and little fuss was made because the polarizing element of "AI" was not involved. Not just JS to Typescript, but to Dart, Go, C, Rust, Zig, Nim etc.
From any reasonable perspective, this is business as usual in the house of cards we all operate in. Perhaps the sensationalization would be justified if the language migration weren't one from less correctness to enforced correctness by default?
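To make "enforced correctness by default" concrete (a generic illustration, not Bun's actual code; `lookup` is a made-up helper): in Rust, a possibly-absent value is an `Option` that the compiler forces you to handle before use, whereas a null or undefined in a looser language can slip through to runtime.

```rust
// Sketch: the compiler rejects using an Option<i32> as an i32 directly,
// so the "missing" case must be handled explicitly at the call site.
fn lookup(table: &[(&str, i32)], key: &str) -> Option<i32> {
    table.iter().find(|(k, _)| *k == key).map(|(_, v)| *v)
}

fn main() {
    let table = [("timeout_ms", 5000), ("retries", 3)];
    // `let retries: i32 = lookup(&table, "retries");` would not compile;
    // we have to pattern-match or supply a default.
    let retries = lookup(&table, "retries").unwrap_or(0);
    let missing = lookup(&table, "nope").unwrap_or(0);
    println!("{} {}", retries, missing); // prints "3 0"
}
```

Whether that class of guarantee outweighs the loss of team familiarity is the real argument here, but it is at least a migration toward stricter compile-time checking, not away from it.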
To your point in general about maintainers holding a mental model of the runtime: I would challenge that to say that it is very likely that there is no developer who holds a complete mental model of an entire runtime at any given point. As with anything of this scale you understand individual parts in their entirety and have general assertions about the rest until specifically revisited, even if you are the sole developer. In this case specifically, Bun has been largely AI driven for quite a while anyway so it is even more unlikely that the developers ever had a complete picture in the first place. If you trusted them before, then nothing has changed.
It's not lost on me that code logic can be subtly incorrect even as tests are passing either, but there isn't exactly a lot of grey area in this particular context. Does your code compile or not? If it builds as expected, then your own unit tests will highlight the difference.
Developers use LLMs to migrate a million-line codebase to a language that they have much less experience with, in such a short amount of time that they likely do not have a good mental model of the migrated code.
At least the tests pass.
Only one person drove the migration, so the number of people who understand the new code is ~0.5, under the assumption there's no way the sole dev could build a mental model of a fresh 1M-line codebase in 6 days.
This is code for a language runtime.
It's great that the tests pass but it's really hard for me to interpret this as anything other than horrible mismanagement of a promising project. When you sit this low in the stack this is grossly irresponsible and I have no idea why anyone would use Bun after this. You'd be literally adopting a runtime the devs presumably don't understand, keep in mind they now somehow need to evolve and maintain this in the future.
Hopefully this remains an experiment, or Bun has some plan for re-upping dev knowledge of the codebase. Sorry but a component with massive blast radius like a runtime isn't really a good candidate for vibe coding, no matter how good the AI is. I'd like the maintainers to actually understand their runtime, thanks.
They won't, they will continue to vibe code it until it collapses under them and the project fades into obscurity. Which it will regardless since it was acquired by Anthropic.
I feel like most of these applications all boil down to "Obsidian but with AI integration baked in up front". It'd be interesting to see approaches that actually rethink the commonplace elements of the experience (graph view etc) rather than just reproduce the same thing but "with AI".
> I notice the word "literacy" thrown around a lot lately, in part by myself, but there's an inherent dishonesty to this. Language does not have absolute meaning, and you cannot read another person's mind. Just because you interpret literal works of art differently, I don't believe that necessarily qualifies you as illiterate. These are not the same qualities.
I interpret critiques of this flavor and the "literacy" issue in general as being about a lack of interpretive range more than a tendency to produce some "incorrect" interpretation.
I think people are concerned that declining sophistication in readers actively prevents them from even being aware that some more complex interpretation is possible when engaging with a text. People read the overwhelming lack of nuance in internet comment threads as evidence of this. You can question whether or not that's a legitimate inference in isolation (I think it's dubious, personally), but, when bolstered by evidence from studies about how much people read for leisure, and falling grades on reading comprehension exams, I think the argument gains a little more weight.
I don't necessarily take the author as saying that these commentators are wrong about the NYT author's self-awareness, but rather that evidence of a more complete reading would show itself in the comments if they had a more nuanced interpretation. There's a difference between flat out saying "wow this article really makes the writer look like a horrible person" and "I'm glad the writer had the courage to share this and seems to be growing, but I'm amazed they were ever such a horrible person at some point in their lives". Again, it's probably unfair to make a judgement about overall interpretive ability based on one comment alone—one would actually need to subject the commenter to reading comprehension exams to know—but if you do feel that extrapolating the judgement to population tendencies is legitimate, I understand why you might draw literacy conclusions.
I agree. It's more that I find it difficult to fault people for not recognizing motifs that do not actually speak to them, possibly even after a ton of exposure and instruction, or given a specific context. At that point, it's less the audience being medium illiterate, and more the medium being audience illiterate so to speak. Or rather the creator being medium or audience illiterate.
This is completely true. From what I can tell talking to people outside of tech, the AGI and "omg this stuff is wild" hype and fears have completely dissipated. Ironically, the average person sees these tools the way you'd typically expect a cold, rational technologist to see them: just another tool.
I think a lot of people are just getting their first taste of agent harnesses plus slightly better models right now, and yes, the first time you use them it seems scary and amazing. By the hundredth time though, it's very apparent that there is still tremendous work to do before any kind of fully automated software pipeline (let alone any other domain) can be realized.
False consciousness was the old Marxist term for this inadvertent working against your own ultimate self-interest. It's rife in capitalism. If you look closely you'll see it everywhere.
(note that even the "her kids will be ok" isn't true at the limit. If wealth concentrates sufficiently it will lead to societal collapse)