Hacker News | new | past | comments | ask | show | jobs | submit | epiccoleman's comments

I've got an Orwell book on my shelf whose title, at least, has the same thesis!

https://archive.org/details/AllArtIsPropagandaCriticalEssays...


At least in my experience, there's another mechanism at play: people aren't making it visible if AI is speeding them up. If AI means a bugfix card that would have taken a day takes 15 minutes, well, that's the work day sorted. Why pull another card instead of doing... something that isn't work?

Minecraft JAVA --------> C++

that one gave me an actual lol.


Are you paying per-token after Anthropic closed the loophole on letting you log in to OpenCode?

If you have a GitHub sub, you can use OpenCode -> GitHub -> Anthropic models. It's not 100% (the context window is smaller, I think, and they can be behind on model version updates), but it's another way to get to Anthropic models without using CC.

Yup, the context window there is only half of what you get in CC, so it's only a weak alternative. They burned bridges with the dev community with their decision to block any other clients.

When did they successfully close the loophole? I know they tried a few times, but even the last attempt from a week or two ago was circumvented rather easily.

Oh, sounds like I'm just out of the loop then. I had an Opencode install that I was planning to check out, and then like, the next day there was the announcement from a week or two ago, so I just kinda shrugged and forgot about it.

It's not really that the 4th amendment is less applicable, it's that the procedural protections are lower in civil proceedings.

I think it's a pretty big undersell to describe ICE as "marshals" too - they've got plenty of discretion in how they prioritize targeted people and who they detain. They are not just a neutral party executing court orders.


In theory yes, but in practice it's less clear. There are conflicting Supreme Court precedents that weaken the Fourth Amendment in cases where criminal penalties don't apply. Asset forfeiture is another example.

> I think it's a pretty big undersell to describe ICE as "marshals" too - they've got plenty of discretion in how they prioritize targeted people and who they detain. They are not just a neutral party executing court orders.

Yep. That's also a difference between theory and practice.


Which companies do you expect to be taken out?

Google and Microsoft will obviously remain. I have a hard time envisioning that OpenAI or Anthropic will go under - especially Anthropic, who are reportedly raking in billions from Claude subscriptions.

Just from my armchair predictions, it's not really any of the juggernauts who have to worry, but rather the many companies springing up to try SaaS offerings with LLMs at the core. A bubble pop there could certainly cause some strife, but I'm just not seeing the mechanism by which these too-big-to-fail tech companies and the heavily invested "frontier AI companies" are going to suddenly cease to exist.

I think the dotcom bubble is a fairly apt metaphor in the key sense that the web didn't go anywhere - just a lot of small players lost their tickets on the gravy train. "Big tech" as it existed at the time of the bubble pop trundled along and continued making gobs of money.


Mozilla is probably doomed in the long term. I think they're in the exact same boat as Microsoft, and wholly lack the self-reflection required to turn the ship around.

Firefox will continue to languish while Mozilla execs receive 8-figure bonuses until there's nothing left to extract.


OpenAI and Anthropic are bleeding money and both need hundreds of billions of dollars in the next couple of years to break even. Oracle is highly overleveraged and I am hoping that the bubble takes them out. You can find the gory details at Ed Zitron's blog. https://www.wheresyoured.at/premium-how-the-ai-bubble-bursts...


Anthropic at least has Amazon’s backing. OpenAI is where the industry is stuffing all of its bad debt and transparently bad deals. It’s the sacrificial company this time around.

I would dance on Oracle’s grave, but they have too much staying power because of their core database and ERP business.


> Google and Microsoft will obviously remain

Microsoft seem to be pushing all kinds of users away in all directions at the moment while focused on the AI bubble. Once it bursts/deflates, will they come back?

Or are we looking at a post-Windows future, where MS just focuses on cloud stuff?

(Or will there be a 'we learned from our mistakes, honest' Windows 12 that wins people back in the same way that Win10 did after Win8?)


Great link, thanks for sharing!


A little different than what you're saying, but you reminded me of an experience I had with Inside - which I enjoyed a lot overall, but -

There were a number of puzzles involving pushing boxes around, and something that really irritated me was that I would understand the solution but then have to go implement it by moving around and doing the pushing with somewhat clunky controls.

It was sort of interesting from a gameplay perspective - that feeling of "eureka" followed by "dammit, now I've gotta do this schlep work".


I found that many times with some of the new Zelda games - ok, what do I have, how do I do this, hhmmm, aha!

And then I know what I need to do, I know it's doable, and then I get frustrated trying to do it in game.


I completely agree. I let LLMs write a ton of my code, but I do my own writing.

It's actually kind of a weird "of two minds" thing. Why should I care that my writing is my own, but not my code?

The only explanation I have is that, on some level, the code is not the thing that matters. Users don't care how the code looks, they just care that the product works. Writing, on the other hand, is meant to communicate something directly from me, so it feels like there's something lost if I hand that job over to AI.

I often think of this quote from Ted Chiang's excellent story The Truth of Fact, the Truth of Feeling:

> As he practiced his writing, Jijingi came to understand what Moseby had meant: writing was not just a way to record what someone said; it could help you decide what you would say before you said it. And words were not just the pieces of speaking; they were the pieces of thinking. When you wrote them down, you could grasp your thoughts like bricks in your hands and push them into different arrangements. Writing let you look at your thoughts in a way you couldn't if you were just talking, and having seen them, you could improve them, make them stronger and more elaborate.

But there is obviously some kind of tension in letting an LLM write code for me but not prose - because can't the same quote apply to my code?

I can't decide if there really is a difference in kind between prose and code that justifies letting the LLM write my code, or if I'm just ignoring unresolved cognitive dissonance because automating the coding part of my job is convenient.


To me, you are describing a fluency problem. I don't know you or how fluent you are in code, but what you have described is the case where I have no problem with LLMs: translating from a native language to some other language.

If you are using LLMs to precisely translate a set of requirements into code, I don't really see a problem with that. If you are using LLMs to generate code that "does something" and you don't really understand what you were asking for nor how to evaluate whether the code produced matched what you wanted, then I have a very big problem with that for the same reasons you outline around prose: did you actually mean to say what you eventually said?

Of course something will get lost in any translation, but that's also true of translating your intent from brain to language in the first place, so I think affordances can be made.


> what is your calf, how does it do?

... it's a calf, dad, just like yesterday

