> At the moment, sure. They've only been available for about 5 minutes in the grand scheme of dev tools. If you believe that AI assistants are going to be put back in the box then you are just flat out wrong. They'll improve significantly.
I am extremely skeptical that LLM-based generative AI running in silicon-based digital computers will improve to a significant degree over what we have today. Ever.
GPT-2 to GPT-3 was a sea change improvement, but ever since then, new models have only been improving incrementally, despite taking exponentially more power and compute to train. Couple that with the fact that processors are only getting wider, not faster or more energy efficient, and without an extreme change in computing technology we aren't getting there with LLMs.
Either the processors or the underlying AI tech needs to change, and there is no evidence that either is happening.
> The belief that AI is worthless unless it's writing the code that a good dev would write is a trap that you should avoid.
I have no idea what you're even trying to say by this. Is this some kind of technoreligion that thinks AGI is worth the endeavor regardless of the harm that comes to people along the way?
Same. I've heard that this area is improving exponentially for years now, but I can't really say the results I'm getting are any better than what I originally experienced with ChatGPT.
The dev tooling has gotten better; I use the integrated copilot every day and it saves me from writing a lot of boilerplate.
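(To make "boilerplate" concrete: the sketch below is a made-up Python example, not from any real codebase, but it's the kind of thing where I write the fields and the assistant fills in the rest almost verbatim.)

    # Hypothetical example of assistant-completed boilerplate: once the
    # dataclass fields exist, the from_dict/to_dict plumbing is mechanical.
    from dataclasses import dataclass

    @dataclass
    class User:
        id: int
        name: str
        email: str

        @classmethod
        def from_dict(cls, data: dict) -> "User":
            # Field-by-field mapping -- exactly the repetitive code
            # I no longer type by hand.
            return cls(id=data["id"], name=data["name"], email=data["email"])

        def to_dict(self) -> dict:
            return {"id": self.id, "name": self.name, "email": self.email}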
But it's not really replacing me as a coder. Yeah I can go further faster. I can fill in gaps in knowledge. Mostly, I don't have to spend hours on forums and stack overflow anymore trying to work out an issue. But it's not replacing me because I still have to make fine-grained decisions and corrections along the way.
To use an analogy, it's a car but not a self-driving one -- it augments my natural ability to super-human levels; but it's not autonomous, I still have to steer it quite a lot or else it'll run into oncoming traffic. So like a Tesla.
And like you I don't see how to get there from where we are. I think we're at a local maximum here.
To continue the car analogy - are you really suggesting we're at 'peak car'? You don't believe that cars in 20 years time are going to be significantly better than the cars we have today? That's very pessimistic.
There was a meme from back when - "if cars advanced like computers we'd be getting 1000 miles per gallon by now".
Thinking back to the car I had 20 years ago, it's not all that different from the car I have now.
Yes, the car I have now has a HUD, CarPlay, wireless iPhone charging, an iPhone app, adaptive cruise control, and can act as a wifi hotspot. But fundamentally it still does the same thing in the same way as 20 years ago. Even if we allow for EVs and hybrid cars, it's still all mostly the same. The Prius came out in 2000.
And now we've reached the point where computers advance like cars. We're writing code in the same languages, the same OSes, the same instruction sets, for the same chips as we did 20 years ago. Yes, we have new advancements like Rust, and new OSes like Android and iOS, and chipsets like ARM are big now. But iPhone/iPad/iMac, C/C++/Swift, OS X/macOS/iOS, PowerPC/Intel/ARM... fundamentally it's all homeomorphic - the same thing in different shapes. You take a programmer from the 70s and they will not be so out of place today. I feel like I'm channeling Bret Victor here: https://www.youtube.com/watch?v=gbHZNRda08o
And that's not for lack of advancements in languages, OSes, instruction sets, and hardware architectures; it's for a lack of investment and commercialization. I can get infinite money right now to make another bullshit AI app, but no one wants to invest in an OS play. You'll hear 10000 excuses about how MS this and Linux that and it's not practical and impossible and there's no money in it, so on and so forth. The network effects are too high, and the in-group dynamic of keeping things the way they are is too strong.
But AGI? Now that's something investors find totally rational and logical and right around the corner. "I will make a fleet of robot taxis that can drive themselves with a camera" will get you access to unlimited wallets filled with endless cash. "I will advance operating systems past where they have been languishing for 40 years" is tumbleweeds.
The cars we have today are not so far off from the cars of 100 years ago. So yes, I highly doubt another 20 years of development, after all the low hanging fruit has already been picked, will see much change at all.
"after all the low hanging fruit has already been picked"
Another great analogy. LLMs allow us to pick low hanging fruit faster. If we want to pick the higher fruit, we'll need fundamentally different equipment, not automated ways to pick low hanging fruit.
For what it's worth, I agree quite strongly with you and moron4hire in this thread.
I wanted to make an observation: what you two are describing seems to me to map onto the Pareto principle quite neatly.
LLMs seem to have rapidly (maybe exponentially) approached the 80% effectiveness threshold, but the last 20% is going to be a much higher bar to clear.
I think a lot of the disagreement around how useful these tools are comes down to this. You can tell which people are happy with 80% accuracy versus those with higher standards.
I think a lot of people use forums and stack overflow to find information about a problem they are facing. But you're right, specs, docs, and code are other sources that I hadn't mentioned. I find that forums and stack overflow have utility beyond the documentation, because the docs don't contain information about issues people have faced in the past and accounts about how they have been solved.
But the same idea applies to specs/docs -- instead of reading the specs or the docs, I'd rather be talking to an LLM trained on the specs and docs.
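(Concretely -- and this is only a hypothetical sketch using the OpenAI Python client, with a made-up "spec.md" and question, not a claim about how anyone actually builds this -- "talking to the docs" today usually means pasting the relevant doc text into the prompt rather than literally training on it.)

    # Minimal sketch: answer questions against a spec by pasting it into
    # the prompt. Assumes the OpenAI Python client; the file name, model
    # name, and question are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("spec.md", "r", encoding="utf-8") as f:
        spec_text = f.read()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided spec. Quote the relevant section."},
            {"role": "user",
             "content": f"SPEC:\n{spec_text}\n\nQUESTION: What does the retry header mean?"},
        ],
    )
    print(resp.choices[0].message.content)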
There was a point where I started to dive more into specs, docs, and code, and rely less on just searching for the answer I'm after.
But in defence of those spending hours on forums, sometimes a project is not well documented, and the code isn't easy to read. In those situations though, I'll be browsing their GitHub issues or contacting them directly.
Often though I find it highly valuable to go read the docs, even if an LLM has given me a working example. Sometimes, I find better ways, warnings, or even information on unrelated things I want to do.
> I have no idea what you're even trying to say by this. Is this some kind of technoreligion that thinks AGI is worth the endeavor regardless of the harm that comes to people along the way?
I'm saying there is a lot of value in tools that are merely 'better than the status quo'. An AI assistant doesn't need to be as good as a dev in order to be useful.
These tools don't behave like assistants though, despite being advertised as being assistants
They turn you into the assistant instead of the programmer in the driver seat
Whenever you use them, you have to shift into "code review mode", which is not the role of the primary programmer on a task; it is the role of a secondary programmer doing a PR review.
It's "a lot of value" if you like being the assistant to a very inconsistent junior programmer