If we know the outcome of that code, such as whether it caused bugs or data corruption or a crappy UX or tech debt -- which is potentially available in subsequent PR commit messages -- it's still valuable training data.
Probably even more valuable than code that just worked, because evidently we have enough of that and AI code still has issues.
Seemed like satire to me. The author is pretty intentionally oblivious, and even suggests that other people see flaws in their writing that they don’t see. Also, the most egregious errors are right after comments about grammar and such.
I submit that there is a difference between me and a corpse. Or between a steak and a cow in the field.
"Well, okay, you're just (living) flesh on bones." There's a difference between me and a zombie (or, if you prefer, brain-dead me). There's a difference between me and lab-grown organs [1], or even between me and my kidney cut out of me.
> It’s not even an interesting question.
Consciousness is an active area of research (ergo, interesting enough for some people to devote research to it): biologically [2] and philosophically [3].
Unless you enjoy nihilism, there are some serious problems with materialism (that is, the view that matter is all there is) that we are running into. There are also some philosophical problems with it; a cursory search turned up this journal article [4].
The point is that if we're simplifying LLMs to being "just" a bag of math and can discard them because of that, then humans are also "just" a bag of meat and can similarly be discarded. Somewhere in that bag of math, LLMs take on properties that some people find hard to dismiss simply because it's all matrix multiplication. It's an oversimplification, and if you oversimplify, you lose resolution.
> individual contributors are evaluated by their execution on deterministic tasks.
Ha! Apparently the author hasn't been asked "how long will it take to code this?" yet... And isn't a common developer complaint that management does not know how to evaluate them, and substitutes things like how quickly a task gets completed, with the result that some guy looks amazing while his coworkers get stuck with all his technical debt?
> My capitalist side says it's always the product not the process.
Your capitalist side needs to read some Deming. "Your system is perfectly tuned to produce the results that you are getting." Obviously, then, if you want better results, you need to improve your system.
Also, "the product" is ambiguous. Is it the overall product: how the product sits in the market, how the user interacts with it to achieve their goals, the manufacturability of the product, etc.? That is the Steve Jobs sort of focus on the product, and it is really more of a system (how the product relates to its user, environment, etc.). However, AI doesn't produce that product, nor does any individual engineer. If "the product" means "the result of a task", you don't want to optimize that. That's how you get Microsoft and enterprise products. Nothing works well together, and using it is like cutting a steak with a spoon, but it has a truckload of features.
I think adding new features is exactly the sort of place where AI is terrible, at least after you do it for a while. I think it's going to have a tendency to regenerate the whole function(s), but it's not deterministic. Plus, as others have said, the code isn't clean. So you're going to get accretions of messy code, the actual implementation of which will change around each time it gets generated. Anything not clearly specified is apt to get changed, which will probably cause regressions. I had AI write some graphs in D3.js recently, and as I asked for different things, the colors would change, how (if) the font sizes were specified would change, all kinds of minor things. I didn't care, because I modified the output by hand, and it was short. But this is not the sort of behavior I want my code output to have.
I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the code I've had to clean up from people who should not be developers look fairly straightforward in comparison.
Skill issue. If you just one-shot everything, sure, you'll get a messy codebase. But if you just manage it like a talented junior dev, review the code, provide feedback, and iterate, you get very clean code. Minus the arguing you get from some OCD moron human who is attached to their weird line length formatting quirk.
Wow, 48k for $14000. Now you can get a MBP with a million times more memory for $3500 or so. Whereas that CPU was clocked at 1 MHz, so CPUs are only several thousand times faster, maybe something like 30,000 times faster if you can make use of multi-core.
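A quick back-of-the-envelope check of those ratios (the MBP figures here are assumptions: a 48 GB configuration and roughly a 4 GHz clock across 8 performance cores, not exact specs):

```python
# Memory: 48 KB then vs. an assumed 48 GB MacBook Pro configuration now
old_ram_bytes = 48 * 1024       # 48 KB
new_ram_bytes = 48 * 1024**3    # 48 GB (assumed config)
print(new_ram_bytes // old_ram_bytes)  # 1048576 -> about a million times more

# Clock speed: 1 MHz then vs. an assumed ~4 GHz per core now
old_clock_hz = 1_000_000
new_clock_hz = 4_000_000_000    # assumed modern per-core clock
per_core = new_clock_hz // old_clock_hz
print(per_core)                 # 4000 -> several thousand times, single-core
print(per_core * 8)             # 32000 -> roughly 30,000x if 8 cores are usable
```

Of course clock speed understates the real gap, since modern cores also do far more work per cycle, but the rough shape of the comparison holds.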
I think the reason that I like dark mode is that I have had floaters in my eyes since at least age 14. They stand out against a bright white window background, but I don't notice them at all on a dark window with light text.
Or maybe it's just because that's how IBM PC DOS, BASICA, etc., as well as the VT100, VT220, VT300s that I used did it.
(Also, I think displays should paint with light, and having a white background is painting darkness on a computer screen. It's particularly bad for presentation slides. A light background just screams "PowerPoint presentation".)
Not to disagree with you, but in the case of Civilization, I do find it addicting in both senses. It is one of two games that I just cannot play, because I will be up until 3am playing. (Puzzle & Dragons was the other one; I think I had to uninstall it the day after I downloaded it.)