I've just spent two weeks vibe-coding a pretty complex Python+Next.js app. I've forced Codex into TDD, so everything(!) has to be tested.
So far, it is really really stable and type errors haven't been a thing yet.
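The red-green loop I'm forcing can be sketched like this (the function name and behavior are hypothetical, just to show the shape of a test-first unit, not code from the actual app):

```python
# Test-first sketch: the test is written before the implementation,
# and the implementation exists only to make it pass.
# `slugify` is a hypothetical helper, not from the real codebase.

def slugify(title: str) -> str:
    """Lowercase a title and replace inner spaces with hyphens."""
    return title.strip().lower().replace(" ", "-")

def test_slugify():
    assert slugify("  Hello World ") == "hello-world"

test_slugify()
```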
Not that I disagree; I'm sure that with Rust it would be even more stable.
What will you use for dependent types, Idris 2? Lean? Neither is as popular as Rust, especially counting the number of production-level packages available.
It's quite sad to see someone react to a comment they disagree with by assuming the differing opinion was paid for. I'd love it if you dug into my comment history and found even a shred of evidence that I'm being paid to talk positively about my programming language of choice.
All comments are paid for in some way, even if only in "warm fuzzies". If that is sad, why are you choosing to be sad? But outlandish comments usually require greater payment to justify someone putting in the effort. If you're not being paid well, what's the motivation to post things you know don't make any sense to try and sell a brand?
No, unless you mean the problem of over-engineering? In which case, yes, that is a realistic concern. In the real world, tests are quite often more than good enough. And since they are good enough, they end up covering all the same cases a half-assed type system is able to assert anyway, by virtue of the remaining logic needing to be tested, so the type system doesn't end up all that important in the first place.
A half-assed type system is helpful for people writing code by hand. Then you get things like the squiggly lines in your editor and automated refactoring tools, which are quite beneficial for productivity. However, when an LLM is writing code, none of that matters. It doesn't care one bit whether the failure report comes from the compiler or the test suite. It is all the same to it.
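To make that concrete, an ordinary behavior test exercises the types end to end anyway. A minimal sketch, with a made-up function:

```python
# A behavior test catches type mistakes as a side effect: if
# total_price returned a string or None, the assertion would fail
# just as surely as a type checker's squiggle would flag it.
# `total_price` is a hypothetical example, not a real API.

def total_price(prices, tax_rate):
    """Sum a list of prices and apply a tax rate."""
    return sum(prices) * (1 + tax_rate)

def test_total_price():
    # Exercising the logic also exercises the types end to end.
    assert total_price([10.0, 20.0], 0.5) == 45.0

test_total_price()
```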
I suspect a more general and much more clever learning algorithm will emerge by then, one that requires less training data to reach a competent problem-solving state faster, even with dirty data: something able to discriminate between novel information and junk. Until then, I think there will be a quality decline after a few more years.
How will it emerge? In the past we've been told that the a(g)i will write itself, rapidly iterating into a superintelligence that handily solves all our current and future problems, but it's beginning to look like a chicken-or-egg scenario.
Living systems were able to brute-force their way to the human brain, but it took billions of years and access to parallel processes that make the entire collective history of human computation seem like a mote next to a star.
What novel spark do you see accelerating this process to such a hyperbolic extreme?
I would imagine a trajectory similar to AlphaGo's: it starts out trying to replicate humans and then at a certain point pivots entirely to self-play. I think the main hurdle with LLMs is that there isn't a strong reward target to go after. The current target seems to be simply replicating humans, but to go beyond that they will need a different target.
I agree in general, but defining an appropriate target seems intractable at the moment. Perhaps it is something the AIs will have to define for themselves.
I think real intelligences are working with myriad such targets, but an adversarial environment seems essential for developing intelligence along this axis.
I do think if there's a path to AGI from current efforts it will be through game play, but that could just be the impressionable kid who watched Wargames in the 80s speaking through me.
It took a billion years to get to the tool-making stage, then less than a thousandth of that time to make CPUs, then a thousandth of that time to make LLMs. We are in a parabolic extreme.
This is begging the question. What evidence is there that this is all the same "stuff" driving towards some future apex? What does it mean to "get to" the tool making state outside of a Civ-style video game?
Sorry but for $5 in credits you can have an agent port over all your bullshit to the next fad. I'll have one port over all my bullshit when the time comes too.
Exactly. What Meta accomplished could have been done by a team of less than 40 mediocre engineers. It’s really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn’t even feel real.
Actually, I would like to see a post-mortem showing where all the money went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.
Were they just piling up cash in the parking lot to set it on fire?
At least part of the funding went to research on hard science related to VR, such as tracking, lenses, CV, 3D mapping etc. And it paid off, IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.
Whether it was worth it is another question, but I would not be surprised if it's recycled to power a futuristic AI interface or something similar at some point.
Even within the XR industry, we had no clue where all that money went. During the metaverse debacle, the entire industry stagnated. Once metaverse failed, XR adjacent shops started to fail. There was no hardware or technique innovation shared with the rest of the industry, and at the time the technology was pretty well settled.
Since then we lost all the medium players and it's basically just Facebook, Valve, and Apple.
Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.
I have a family member who has struggled to hold jobs and keep relationships, seems to have had a learning disability their entire life, and potentially suffered brain damage early in childhood (due to lack of oxygen at birth, and to being tasked with burning the family's trash out back (self-reported)).
I recently moved them into my house because they had nowhere to go and they are now completely obsessed with preparing elaborate dishes, growing fermented foods and carefully curating their microbiome (by doing dubious stuff they find on YouTube). This syndrome feels very much like what this family member is doing but the rarity makes me think it's probably not this.
It's to the point where their room is full of jars of nuts, fermented items, and they go into detail describing the pleasure of the textures/tastes of food and how much of a treat it is for them to enjoy random everyday things normal people eat.
My wife and I physically cringe when they talk about it in the kitchen given that they put more energy into this than seeing their own grandchildren, keeping jobs, or helping around the house.
Reading this, it's exceptionally confusing how many family members you're talking about, exactly; your use of they, them, their has me imagining some microbiome-obsessed tribe has moved in and is holding conferences in your kitchen...
No, I found it readable, though I can understand why they might be a little confused too; it's nothing to worry about, in my opinion.
Honestly, regarding your anecdote and the gourmet syndrome in the title itself, I don't really have anything to add, except to note that the human mind truly works in remarkable ways.
But each day science uncovers more and more secrets about our brains. Maybe one day the gourmet syndrome, or whatever your anecdote describes (if it's even a syndrome; I don't have much medical knowledge, to be honest), will be explained by future scientific advances.
It's crazy how far we have come in medical science, and at the same time how far we haven't (I don't mean that in a bad way).
Singular they/them is mostly only a problem for non-native speakers. It's a fairly uniquely English thing to blur the singular and plural, at least among languages that use pronouns much.
Yes. They know that most humans have poor impulse control, are easily pulled off task, and will fall into an addictive and lucrative loop. It makes perfect sense to show random unrelated shit.
I think the poster you're responding to is correct. I've seen it many times myself. And just so you know, asking for a piece of data and not getting it is not going to be proof that you're right.
No, but it will show, as someone else already responded, that they don't understand SO's systems and processes at all. The question they linked [0] was closed by the asker themselves; it's literally one of the comments [1] on the question. Most questions aren't closed by moderators, or even by user voting, but by the askers themselves [2], which shows up in the table as the community user. The community user gets attributed all automated actions, including whenever a user agrees with the closure of their own question [3]. (The same user also gets attributed a bunch of other stuff [4].)
This shows that critics of Stack Overflow don't understand how Stack Overflow works, and start attributing things that SO users see as normal and expected to some kind of malice or cabal. If you learn how it works, and how long it has been working this way, you will see that cases of abuse are not only rare, they usually get resolved once they are known.
Statements like those are meant for you to just bounce off of them and fuck off. They don't care about the truth and they know you don't care enough to do anything about their lying. It's an entire business model.
- Rust: nearly universally compiles and runs without fault.
- Python, JS: very often will run for some time and then crash.
The reason, I think, is type safety and the richness of the compiler's errors and warnings. Rust is absolutely king here.
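The Python failure mode above can be sketched like this (the records are made up): a type error that Rust's compiler would reject up front only surfaces once the bad value is actually reached at runtime.

```python
# Sketch of "runs for a while, then crashes": the stringly-typed
# record only blows up when the loop finally reaches it, not at
# startup. Hypothetical data, purely illustrative.

records = [{"qty": 2}, {"qty": 3}, {"qty": "4"}]

def total_qty(records):
    t = 0
    for r in records:
        t += r["qty"]  # TypeError only on the third record
    return t

try:
    total_qty(records)
except TypeError as e:
    print("crashed at runtime:", e)
```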