Hacker News | past | comments | ask | show | jobs | submit | johnsmith1840's comments

How much energy does a human + work environment cost vs. an LLM call?

Human driving into work? Heating/cooling?

Wonder why big AI hasn't sold it as an environment-SAVING technology.


After AI tech matures more, we will be able to save EVEN MORE energy by eliminating all the people from the environment.

That's the point? I agree, and roughly it's one of two cases.

A: you made this as a free gift to anyone, including OpenAI. B: you made this to profit yourself in some way.

The argument he makes is: if you did the second one, don't do open source?

It does kill a ton of open-source companies though, and the truth is that model of operating is not going to work in this new age.

It's also sad because it means the whole system will collapse. The processes that made him famous can no longer be followed. Your open-source code will be used by countless people, and they will never know your name.

It's not called a disruptive tech for nothing. You can't un-open-source all that code without lobotomizing every AI model.


Companies competing to buy ad space and SEO of every website, on top of it.


Because it's a fantasy for an unknown amount of time. 1 year? 10? 50? Never? There hasn't been a single proper breakthrough in continual learning that would enable it. Anyone who studies CL also gets super pissed at it: the problem and the solution counteract each other in our current understanding, yet a fruit fly does it no problem!


Echoing the other comment, they showed another big thing: the output of an AI model is the AI model. If you mass-prompt-scrape their AI, you can recreate it almost exactly.

Very dangerous, if you think about it, that the product itself is the raw building block for itself.

OpenAI spends $1B on their model, releases it, and instantly it gets scraped by a million bots so some country or company can build their own model.
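The scraping risk can be sketched in miniature: treat the deployed model as a black-box "teacher", mass-collect its outputs, and fit a "student" to them. Everything below is a hypothetical toy (a two-token bigram "teacher" standing in for a real LLM API), not anyone's actual pipeline — but it shows why enough (prompt, response) pairs recover the teacher's behavior:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Hypothetical "teacher": a tiny fixed next-token distribution
# standing in for a large proprietary model behind an API.
TEACHER = {
    "the": {"cat": 0.7, "dog": 0.3},
    "cat": {"sat": 0.9, "ran": 0.1},
}

def teacher_sample(token):
    """Query the teacher once (i.e. 'scrape' one API response)."""
    dist = TEACHER[token]
    return random.choices(list(dist), weights=list(dist.values()))[0]

# Mass-scrape: collect (prompt, response) pairs from the teacher.
scraped = [(t, teacher_sample(t)) for t in ["the", "cat"] * 5000]

# "Train" a student purely on the scraped outputs: here, just
# empirical frequency estimates of the teacher's distribution.
counts = defaultdict(Counter)
for prompt, resp in scraped:
    counts[prompt][resp] += 1

student = {
    p: {tok: n / sum(c.values()) for tok, n in c.items()}
    for p, c in counts.items()
}

# The student converges to the teacher without ever seeing its weights.
for prompt, dist in TEACHER.items():
    for tok, p_true in dist.items():
        assert abs(student[prompt][tok] - p_true) < 0.05
print("student recovered teacher distribution")
```

Real distillation fits a neural student to sampled text rather than counting frequencies, but the economics are the same: the expensive part (the output distribution) leaks through the API one sample at a time.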


I did a lot of playing around with LLMs early on for this.

In some early testing I found that injecting a "seed" only somewhat helped. I would inject a sentence of random characters to steer the generated output.

It did actually improve its ability to make unique content, but it wasn't great.

It would be cool to formalize the test for something like password generation.
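One way to formalize the test is simply to count distinct outputs with and without an injected seed. The sketch below uses a hash-based `mock_llm` as a purely hypothetical stand-in for a deterministic (temperature-0) model call — swap in a real API call to run the experiment for real:

```python
import hashlib
import random
import string

def random_seed_sentence(rng, n=24):
    """A 'sentence' of random characters to inject before the prompt."""
    return "".join(rng.choice(string.ascii_letters + " ") for _ in range(n))

def mock_llm(prompt):
    # Hypothetical stand-in for a deterministic LLM:
    # identical prompts always produce identical outputs.
    return hashlib.sha256(prompt.encode()).hexdigest()[:12]

base = "Generate a strong password:"

# Without a seed, 100 identical calls collapse to a single output.
plain = {mock_llm(base) for _ in range(100)}

# With a random seed injected, every call sees a distinct prompt.
rng = random.Random(42)
seeded = {mock_llm(random_seed_sentence(rng) + " " + base) for _ in range(100)}

print(len(plain), len(seeded))
```

With a real model the gain is weaker than this mock suggests — the model can partially ignore the seed, which matches the "helped, but wasn't great" result above. The sketch only formalizes the measurement: unique outputs per N calls.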


I bet you the predictions are largely correct, but technology doesn't care about funding timelines and egos. It will come in its own time.

It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally solve the pace of true innovation.

I've been saying for years now that the next AI breakthrough could come from big tech, but it's just as likely to come from a smart kid with a whiteboard.


Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.


> I've been saying for years now that the next AI breakthrough could come from big tech, but it's just as likely to come from a smart kid with a whiteboard.

It comes from the company best equipped with capital and infra.

If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.

This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.


Yes, scaling is always capital-hungry, but the innovation itself is not.


All of moltbook is the same. For all we know, it was literally the guy complaining about it who ran this.

But at the same time, true or false, what we're seeing is a kind of quasi science fiction. We're looking at the problems of the future here, and to be honest, it's going to suck for future us.


Terrifying concept. This is literally saying that if AI was legal, we'd have an absolutely rigid dystopia.


And this was just about how to decide an auto-accident case, with the experiment varying the circumstances.

My summary is still: seasoned judges disagree with LLM output 50% of the time.


I don't think the elite think all voters are dumb; it's more that they think voters are easy to manipulate into voting for something (which is largely true). Anecdotally, I easily get manipulated by the type of information I consume. I occasionally catch it after the fact or in a conversation with others, but there's no telling how much manipulated information I've just accepted.

From that angle it's a game of who has the money, power, and distribution to enact this manipulation.

Twitter being a prime example. Is Elon "right"? Maybe, but the main point is that it doesn't matter, because he has the distribution.

If you have money but little to no distribution -> you do what Gary is doing. Maybe he'd be interested in removing rights to vote, but someone like Zuck would NOT, because he already has an outsized ability to influence as he sees fit.

