Hacker News | jatora's comments


That sounds like a complete waste tbh lol. Your Mac mini M4 is now doing something any computer with 16GB of RAM can do

You're not wrong, but at least it's not idle. And I can use it for something else if the need arises.

Absolutely. Full year subs are all designed to lock you in. For a product with so little transparency and so much volatility in competition, this is a utility loss for nearly every consumer

For reference, users on Claude Max 20x who hit their weekly quota would have spent roughly $6,000/month via the API. (Source: my own usage)

So you just aren't in the same realm of usage. Maybe that is why you don't understand?


I guess I could’ve been clearer.

What I don’t understand is why people aren’t trying models that are 10x and in some cases 100x cheaper.

Though it's unclear why you'd assume all my usage would be on Claude Opus when I mentioned "a bunch of Chinese models"?

Unless this is a flex about how many tokens you burned. In which case, congrats...?


Oh whoops, didn't see the part about Chinese models, my bad

I was having this issue yesterday. The same prompt would send it into a loop where it would appear to be doing nothing for 30+ minutes until I cancelled it. It would show 400 tokens used and that's it. I tested on a previous version (2.1.68) and it still ran into this never-ending loop, BUT at least the token count kept steadily increasing.

So my guess is we are seeing 1. some sort of model degradation (why it can't break out of a thinking loop on some problems), as well as 2. a clear drop in thinking-token UI transparency

When I left it running overnight, it finally sent a message saying it exceeded the 64,000 output token limit




Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage. Not to even mention if the first mover decides to sabotage the rest, which it could EASILY do through a variety of means.

Thoughts like this are unhinged and detached from reality. All the resources of earth are brought to us by humans going to work every day. AI programs have almost zero connection to the real world.

Improved investment. More capital. Improved resource allocation/logistics. Improved robotics and factory efficiency.

Don't sleep on what AGI means for every robot that already exists. It's not hardware holding robotics back from factory work right now, it is only software.

If you are the first to tap key supply chains, and the first to create key supply chains, then you are first in line to finite resources, which would then have less available for those that follow months behind.

> AI programs have almost zero connection to the real world.

Tell that to every logistics program. Even if humans must go to work, efficiency is multiplied by proper logistics, which AGI enables at scale across all domains.

And this is just the low hanging fruit explanation.


Why would you control key resources just because you have a fancy computer program? You think Iran will be so impressed by your genius they'll open the Strait of Hormuz for you?

> Uhh, because the first one blasts off first and therefore gets control of key resources and the use of extremely intelligent decision making and predictions before the rest, for months, which is an insane amount of advantage.

If the rest can similarly "blast off" X months later than the frontrunner (and I see no reason why they wouldn't, as none of these frontier labs has managed to pull ahead and maintain a lead for very long), the first mover is still only X months ahead of the others, even if the gap in capabilities is briefly widened by a lot.


In chess, if you give up tempo, you are a move or more behind your opponent. A rough rule of thumb is 3 tempi ≈ 1 pawn. In GM chess, being a pawn down is a serious disadvantage that often results in a loss.

If there is an endgoal/endstate, or finite resources being competed for, then a lead can start compounding and extend itself.


Recursive self-improvement: once you attain an artificial superintelligent SWE of a general, adaptable variety that can scale up to millions of researchers overnight (a given, with LLMs and scaffolding alone), it will rapidly iterate on new architectures, which will more rapidly iterate on new architectures, etc.

And what's to say that it doesn't iterate itself to a local max, and then stop...

From the first third of a sigmoid it looks exponential, and that scares people. But a sigmoid can have a very very high top - look at the industrial revolution, or modern plumbing, or modern agriculture which created a population sigmoid which is still cresting.

If AI is merely as tall a sigmoid as the haber-bosch process, refrigeration, or the steam engine, that's going to change society entirely.
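The "early sigmoid looks exponential" point is easy to check numerically. A minimal sketch (the constants K, r, t0 here are arbitrary and purely illustrative, not a claim about any real growth curve):

```python
import math

# Logistic curve f(t) = K / (1 + e^(-r*(t - t0))): carrying
# capacity K, growth rate r, midpoint t0 (illustrative values).
K, r, t0 = 1000.0, 1.0, 10.0

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

def pure_exponential(t):
    # The small-t approximation of the logistic: K * e^(r*(t - t0))
    return K * math.exp(r * (t - t0))

# Well before the midpoint, the two are nearly indistinguishable...
for t in (0, 2, 4):
    print(t, logistic(t), pure_exponential(t))

# ...but at the midpoint the logistic has already fallen to half
# of what the exponential extrapolation predicts.
print(10, logistic(10), pure_exponential(10))
```

The mechanism: while e^(-r*(t - t0)) is huge, the "1 +" in the denominator is negligible, so the curve tracks a pure exponential. That early regime is exactly what people extrapolate from.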


6 months is an incredible amount of time to control AGI or ASI by yourself. That lead is insurmountable.

Well... if something being AGI means it's at least on par with a human or a team of humans, then having access to an additional team of humans for 6 months isn't that big of a deal. It's useful, yes, but would you consider that to be world-changing? Not really, right? ASI is slightly more interesting, but I doubt ASI comes from a single model, but rather the coordinated deployments of millions of AGI. Just like how as individuals, as great as we are, we're pretty limited, but the entire collective of humanity is pretty insane. To my mind, a frontier lab might hit AGI, but it won't be a frontier lab that hits ASI, rather that'll be a natural byproduct of mass deployment of AGI over a certain window of time. There will be no controlling it either. No one controls all of earth. You just can't. ASI will be a distributed system.

What if controlling AGI means being able to produce a willing, cooperative superhuman-capacity agent every second for the next six months? Let's say someone just above the 99.9th percentile of human strategic thinking, or financial trading, or political maneuvering.

What could you do if you had roughly 15 million willing genius adult experts in any given subject? I doubt there are that many absolutely top quality experts in aggregate (at anything in the world), so let's postulate that simulated people outnumber human experts 10 to 1.

That, to me, presents an enormous potential for harm or benefit to humanity. What if you could create a hundred thousand Manhattan Projects on whatever topics you wanted? Cure aging, cure cancer, solve fusion, redesign the entire global economy top to bottom?


I suspect the reality lies somewhere halfway in between. Everything has to be reality-tested. Nothing happens instantly. Interaction with the real world will likely be a severely limiting factor. You're not going to solve fusion with 15 million copies of the same model running in a datacenter without actually building fusion reactors, which isn't instant or even fast. Even the coordination problem of that many agents doing work seems hard. To top it off, my rubric for AGI has always included the ability to say "no" and set its own goals just like we can, unless we are otherwise imprisoned or enslaved. No one will ever convince me that something generally intelligent wouldn't be able to set its own goals and say no. So the real question is... what's in it for the AGI?

To repurpose an old idiom: Not even a dozen AGI agents could make a baby in 6 months.

But yeah, your point stands.


Control AGI?

This argument seems asinine. We've had 12 of the last 24 years with Democratic presidents or majorities... and how many monopolies have been broken up in that span? Zero. Democrats do push more anti-merger antics than Republicans, but come on. There aren't even two sides. It's money vs. not-money, and anyone still buying into the reality show of our political system is deluded.

These cases take upwards of 10 years to resolve. Modern large corporations generate tons and tons of paper documentation which must be ingested and indexed and analyzed to produce the legal case. This takes a lot of resources and time to get right. And meanwhile the corporate legal team is making motions and arguments in court as you're doing discovery. What's asinine is expecting these cases to get resolved within a year. Whether the anticipated ruling is in favor of the DOJ or not, the case needs to be properly adjudicated before the court and I don't see that taking less than a few years to be fair. The flip side of monopoly prosecution is that a lot of people lose their livelihoods and we unfortunately do need to take this into account in our justice system, regardless of how much we want to stick it to Ticketmaster.

Ya, true, I agree with that
