Every other article on this site these days is about AI, and it's incredibly tedious and annoying.
Isn't it enough that clueless marketers who get their tech knowledge from Business Insider and Bloomberg are constantly harping on about AI?
It seems we as a community have resigned or given up in this battle for common sense. Maybe long ago. Still, there should be some form of moderation penalizing these shill posts that only glorify AI as being the future, the same way that not everything about crypto or the blockchain ended up on the front page. With AI, it seems we're looking the other way and are OK with it?
The AI discussions can indeed be repetitive and tiresome here, especially for regulars, but they already seem to be downweighted and clear off the front page quite fast.
But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
> But it's a major focus of the industry right now, involving a genuinely novel and promising new class of tools, so the posts belong here and the high engagement that props them up seems expected.
In your opinion (and admittedly others'), but that doesn't make the overhype any less tiresome. Yes, it is novel technology, but there's always novel technology, and it isn't all in one area, though you wouldn't know it from what hits the front page these days.
Anyway, it's useless to shake fists at the clouds. This hype will pass, just like all the others before it, and the discussion can again be proportional to the relevance of the topic.
I don't know about the professional professionals, but as a science professor, I have to wear a lot of hats, which has required me to gain skills in a multitude of areas outside my area of deep expertise.
I use Claude and ChatGPT EVERY DAY.
Those services help me knock out scripts for data munging and the like very quickly. I don't use them for high-expertise writing, as I find it takes more than I get back, but I do use them to put words on a page for more general things. If your deep expertise is programming, you may not use them much for that either. But man oh man have they magnified my output on the constellation of things I need to get done.
What other innovation in the last decade has been this disruptive? Two years ago, I didn't use this. Now I do as part of my regular routine, and I am more valuable for it. So yes, there is hype, but man oh man, is the hype deserved. Even if an AI winter started right now, the productivity boom from Claude-level LLMs is nothing short of huge.
Personal anecdotes about the benefits of using LLMs don't address complaints about tedious articles over-marketing AI tech. That LLMs provide benefits is well known at this point; it doesn't mean we can't recognize the latest hype cycle for what it is. There's a long list of previous technologies that were going to "change everything".
Yes, of course, but they almost always did, too. The internet. Mobile phones.
I think the issue is whether you see HN posts on AI as basically marketing, or as sharing new advances with a community that needs to be kept on top of them. Some posts are from a small startup trying something, or from a person sharing a tool. I think these are generally valuable. I might benefit from RAG, but I won't build one from scratch.

For this crowd, I can't think of advances in other areas lately that are as impactful as machine learning. It's not like crypto. Crypto was an interesting innovation, but one that mostly sought a market instead of a market seeking an innovation. There is no solid "just use a database" response here, the well-worn refrain against attempted practical uses of cryptocurrency tech. Sure, an AI company built on selling something silly like "the perfect algorithm to find you the perfect date!" is pure hackery, but even at the current level of LLMs, I don't think we are anywhere near understanding their full potential and applications. So even if we are on the brink of an AI winter, it's in the Bahamas.
If HN readers feel that AI-related articles are showing up too much, then I'd say it would be on them to find articles on topics that interest them and post them to HN.
How does any of that apply to this particular article? Isn't a broader historical perspective exactly what's needed if you want to be free from the immediate hype cycle?
One of my biggest irritations with HN comment sections is how frequently people seem to want to ignore the specific interesting thing an article is about and just express uninteresting and repetitive generic opinions about the general topic area instead.
That's not really a justification in my view. The entire education industry is complicit in this circus. It's not just engineers hoping for a payday; it's academics too, hoping for funding and tenure.
I've worked in the analytics space for over ten years, building what today is called "AI" as a service or product. The hype seems more like pent-up release for the valid stuff, and blockchain all over again for the tech-marketer stuff.
Well, AI probably is the future. It might not necessarily be LLMs (I personally don't rate LLMs), but enough people are interested in it nowadays that it's almost certain AGI will happen in our lifetimes.
Because I can't see current techniques for creating LLMs fixing the pre-training problem. Right now big tech companies are training LLMs on, well, pretty much all human knowledge ever assembled, and they're still pretty dumb. They're wrong far too often and they don't have the capacity to learn and figure out things with a limited amount of data as humans do. Also, it's pretty clear that LLMs are flatlining.
Now, they are good text interfaces. They're good for parsing and creating text. There even seems to be very, very basic thought and maybe even creativity (at a very, very basic level). At this point, though, I can't see them improving much more without a major change in technology, techniques, something. The first time I saw them I thought they were just regression analysis on steroids, and not going to lie, they still have that vibe, considering tech companies have clusters of up to 350k H100s and LLMs are still dumber than the average person at most tasks.
I'm currently creating an app that uses an LLM as an interface, and it's definitely interesting, but most of the heavy lifting will be done by the functions it calls and a knowledge database, since it needs more concrete and current knowledge. But hey, it's nicer than implementing search from scratch, I guess.
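That split (the LLM as a thin interface, with deterministic functions and a knowledge store doing the heavy lifting) can be sketched roughly like this. All names here are hypothetical, and the model's tool-call output is simulated rather than produced by a real API:

```python
import json

# Hypothetical knowledge store standing in for the app's database;
# a real app would back this with a search index or DB.
KNOWLEDGE = {"max_upload_mb": "25", "release_date": "2024-03-01"}

def lookup_fact(key: str) -> str:
    # Deterministic "heavy lifting": answers come from concrete data,
    # not from the model's weights.
    return KNOWLEDGE.get(key, "unknown")

TOOLS = {"lookup_fact": lookup_fact}

def dispatch(llm_output: str) -> str:
    # The LLM only emits a JSON tool call; the app runs the real function.
    call = json.loads(llm_output)
    return TOOLS[call["tool"]](**call["args"])

# Simulated model output (a real LLM would produce this via tool calling):
print(dispatch('{"tool": "lookup_fact", "args": {"key": "max_upload_mb"}}'))  # prints 25
```

The point of the pattern is that the model's wrongness matters less: it only routes requests, while correctness comes from the functions and the data behind them.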
And I'll be around explaining why it's a bad idea to stockpile $X00,000,000 worth of equipment in Colombia, where coffee grows readily.
Capital intensive industries require low crime and geopolitical stability. Strongman politics means that investors who buy such equipment will simply be robbed at literal gunpoint by local gangs.
I feel like the raw numbers kind of indicate that the amount of money spent on training, salary, and overhead doesn't add up. "We'll beat them in volume" keeps jumping out at me.
What you're paying for ChatGPT is not likely covering their expenses, let alone making up their massive R&D investment. People paid for Sprig and Munchery too, but those companies went out of business. Obviously what they developed wasn't nearly as significant as what OpenAI has developed, but the question is: where will their pricing land once they need to turn a profit? It may well end up in a place where it's not worth paying ChatGPT to do most of the things it would be transformative for at its current price.
Good point about the business model. AI probably has more winners, but even the ones reaping the rewards are only 4 or 5 big corps.
It seems with crypto the business "benefits" were mostly adversarial (the winners were those doing crimes on the darknet, or ransomware operators getting paid). The underlying blockchain tech itself failed to replace transactions in a database.
The main value of AI today seems to be generative tech for improving the quality of deepfakes, or for helping everyone in business write their communication in an even more "neutral", non-human voice, free of any emotion, almost psychopathic. Like the dudes writing about their achievements on LinkedIn in the third person, only now it's psychopathy enabled by the machine.
Also, I've seen people who, without AI, are barely literate now sending emails that look like they've been penned by a post-doc in English literature. The result is that it's becoming a lot harder to separate the morons and knuckle-draggers from those who are worth reaching out to and talking to.
+1. The other concern is that AI is potentially removing junior level jobs that are often a way for people to gain experience before stepping up into positions with more agency and autonomy. Which means in future we will be dealing with the next generation of "AI told me to do this", but "I have no experience to know whether this is good or not", so "let's do it".
My problem is the abuse of the term AI to the point where it has lost all meaning. I'd be all for a ban on the term in favour of the specific method driving the 'intelligence', as I would rule out some of them qualifying simply because they are not capable of making intelligent decisions, even if they can make complex ones (looking at you, random forest).
In the 1980s, AI was a few people at Stanford, a few people at CMU, a few people at MIT, and a scattering of people elsewhere. There were maybe a half dozen startups and none of them got very big.
Quite incorrect. Even smaller colleges, like the one in Greeley, Colorado, had Symbolics machines, and there were threads of expert systems all throughout the industry.
The industry as a whole was smaller though.
The word sense disambiguation problem did kill a lot of it pretty quickly though.
Threads, yes. We had one Symbolics 3600, the infamous refrigerator-sized personal computer, at the aerospace company. But it wasn't worth the trouble. Real work was done with Franz LISP on a VAX and then on Sun workstations.
There were a lot of places that tried a bit of '80s "AI", but didn't accomplish much.
Or maybe it's me.