rectang's comments | Hacker News

At some point, there will be a successful copyright infringement suit against an LLM user who redistributes infringing output generated by an LLM. It could be the NYTimes suit, or it could be another, but it's coming — after which the industry will face a Napster-style reckoning.

What comes next? Perhaps it won't be that hard to assemble a proprietary licensed corpus and get decent performance out of it. Look at all the people already willing to license their voices.


OpenAI's valuation is more than basically all traditional media companies combined. Nvidia could buy the NYTimes with a month's worth of profits. The top 8 companies in the S&P 500 all benefit more from LLMs being successful than from strict copyright enforcement. Congress has very broad power over copyright law. If a suit is successful, there is a lot of money and power that can be deployed to change copyright law.

And what happened after Napster? Filesharing totally stopped, right?

With the Chinese in the mix, it won't stop AI. It probably will change copyright.


Spotify and Netflix happened.

They tweaked the model: users rent licensed copies of media instead of p2p-downloading pirated copies.

File sharing became far less popular and ubiquitous as a result of their popularity.

I'm tired of seeing this argument on HN: that because something didn't hit 100%, it must have been a failure and not worth doing.

The fact that a limited subset of people still do file sharing is not evidence that the Napster case had no effect.


Can you name an active filesharing app that's in use today? The action against Napster might not have killed filesharing, but it was p2p's Antietam.

The BitTorrent ecosystem is still very much around. I'm a cinephile with a collection of nearly a thousand films in Blu-ray image format, and 95% of that came off a tracker that is actually open, not private.

And Soulseek is still known as the P2P source where you can find all kinds of obscure music.


How did the Napster suit change copyright?

We will see such attempts first against weaker targets: users who don't have enterprise indemnification.

The law exists to protect the elite and punish the underclass. We’re not in a Hollywood movie. Nothing will happen.


Copyright laundering is an illusion.

If the LLM generates output that a court decides is sufficiently derivative, and especially (but not necessarily) if the LLM was trained on the source material being infringed, then whoever redistributes the derivative output is going to be liable for copyright infringement.

Creation of the LLM itself is transformative, but LLM output which infringes is not.


So if someone lifted an entire code base from a vibe-coded app that was derived from a non-permissively licensed project, and claimed the code came from an LLM rather than being stolen, are they not a thief because it all came from the same place? Or are they a thief because someone else copyrighted it first? How do vibe coders protect themselves, not knowing who else has the same derivative code or who held the copyright first? Or can't they?

The only thing a vibe coder should be able to copyright is the prompt text they wrote: not the output of the LLM, only the text they wrote to instruct the LLM what to do. And even that is pretty iffy, because most of it, like "put a button on a page", is not copyrightable.

Will Blender start allowing Anthropic to train on your art automatically unless you opt-out?

No. You can read about what the various sponsorship levels entail here[1].

Blender already has a ton of other Corporate Patron level sponsors, such as Netflix, Meta, Intel, BMW, Adobe and others.

[1]: https://fund.blender.org/corporate-memberships/


How could that possibly work?

I once thought the same about all the copyrighted works on which LLMs are currently trained. Surely they can't just hoover everything up? Haha, silly me.

I understand that creating an LLM itself is transformative, but an LLM trained on copyrighted works remains capable of generating derivative works, which eventually will result in successful copyright lawsuits against LLM users who redistribute those derivative works.

In advance of that day, the great race is to build a licensed corpus as aggressively as possible (see GitHub's latest decision to opt in Copilot usage). Even if Blender doesn't send your data on every save, various options can be developed, such as publishing to a Blender-controlled public channel.


I'm relatively sure the source code I've written and stored on my local computer has not been sucked up into LLM training data. And I believe people working with Blender models are pretty much in the same situation: they don't host their data on a third-party service and openly share it.

There's absolutely no precedent for Blender Foundation sponsorships leading to such things... So no, they probably won't do that.

Only for the SaaS customers

Trying to find a silver lining and think positively...

Will a future administration have an opportunity to build something new and better from scratch, something which institutional resistance would have made impossible before it was all burnt down?


If we're really, really lucky.

Destroying institutions is one heck of a lot easier than building new ones.


It's not even about rebuilding. Some things, once destroyed, can never be recreated, like trust, ocean liners, or the practice of Dísting. The initial act of destruction creates an expectation that it will happen again. Once it does, the process accelerates until the full expectation is that the thing, concept, or practice can never exist again as anything more than a fleeting revival.

Institutional longevity is what distinguishes developed countries from failed or failing states. The whole point of having institutions is to make sure the rules don't change every 4, 6, or 8 years.

Some amazing new administration can come up with tons of good ideas, but those will only become real institutions if they survive for decades to come. Institutions are not just government agencies, laws, or people. Tradition and longevity are probably even more important.

Do you want to build a company in a country where all the laws, tax code, and regulations are replaced with amazing but brand-new ones every 4 years? Probably not?

And changing rules are even worse for scientific research, which most often spans decades or even generations of scientists. People will just choose to live and work somewhere more stable.


That's the best-case scenario. It requires a lot of people currently involved in this to be jailed or executed before we can even begin to move on, though. I'm not super optimistic.

Let us not say executed.

It's a harsher punishment that they live to see the rebuild of what they turned to ash.


Let us not hesitate to seek justice.

I fear that the capture of American media and the DOJ is too far gone, and that following through with proper punishment for the naked corruption of this regime would be unpopular. “Let bygones be bygones.”

Oh, to live in an America where white-collar crimes and financial treason were actually punished…


I'm very much with you for "punish" and "jail", but if you are insisting on execution like commenters upthread, we will part ways there.

The punishment for treason can be death. Risking the lives of American intelligence agents and military operators for financial gain is plainly treasonous. I am not convinced beyond a doubt that that has not happened in this government.

With your focus on death as punishment — the day after an apparent assassination attempt no less — it seems we have less in common than I supposed. I’m not with you after all.

And if we find out that at least one of these assassination attempts was staged as a part of the greater kayfabe, what would your reaction be then?

That aside, my support for judicial executions for treason against the United States is not support for extrajudicial executions, like the ones ICE has been carrying out. I think Italy and Romania are decent examples of countries becoming better off after executing their authoritarian figureheads, although I don't think there are grounds for that in the United States yet.

Execution != assassination, as the latter has a lot less success in creating real, long-lasting change than the former.


Neither the Butler, Pennsylvania nor the White House Correspondents Dinner incidents were staged — that's conspiracy theory nutter nonsense.

(Not that there aren't elements within the Trump administration who might contemplate a false flag operation, but it will be obvious if they try it because they aren't clever enough to pull it off.)

Furthermore, both Ceaușescu's and Mussolini's were extrajudicial executions.

Nixon is a better precedent for the USA because it demonstrates that we're already capable of ejecting a criminal president without bloodshed. It wasn't a perfect process (the Ford pardon was terrible) but in the end Nixon still lost power.


Describing the political machinations of institutional academia as a category where summary executions are applicable is the type of thing that led to the Soviet Union instituting Lysenkoism for decades, along with other profoundly anti-intellectual absurdities, since all the academics had been randomly killed off for a generation. We don't need that. That's hysterical emotional overreaction, which is the opposite of rational academic behavior. The NSF will just get funding in the next administration; this isn't the end of the world. If they just hasten the grant-awarding pipeline in 2028, it'll be a blip in the scheme of things, since these grants can run five years. You're talking about a field of very smart people; everyone is just being more frugal, putting off big purchases, and doing research that isn't expensive, and things aren't blowing up.

Even if so, it doesn't matter, because 4 - 8 years later it'll be reversed again. And because it takes longer to rebuild than dismantle, it will never be the same.

This is the cycle now. 180 degree turns in policy every 4 or 8 years. There's no long term planning.

China and Russia must be enjoying this.


I feel like I'm using Claude Opus pretty effectively and I'm honestly not running up against limits in my mid-tier subscriptions. My workflow is more "copilot" than "autopilot", in that I craft prompts for contained tasks and review nearly everything, so it's pretty light compared to people doing vibe coding.

The market-leading technology is pretty close to "good enough" for how I'm using it. I look forward to the day when LLM-assisted coding is commoditized. I could really go for an open source model based on properly licensed code.


I also use it this way and I'm overall pretty happy with it, but it feels like they really want us to use it in "autopilot" mode. It's like they have two conflicting priorities of "make people use more tokens so we can bill them more" and "people are using more tokens than expected, our pricing structure is no longer sustainable"

(but I guess they're not really conflicting, if the "solution" involves upgrading to a higher plan)


I feel like they are making it harder to use it this way. Encouraging autonomous use is one thing, but it really feels more like they are handicapping engaged use. I suspect it reflects their own development practices and needs.

This is something I've thought about as well. The way the caps are implemented really disincentivizes engaged use. The 5-hour window especially is very awkward and disruptive. The net result is that I have to somewhat plan my day around when the 5-hour window will kick in. That by itself is a powerful disincentive from using Claude. It has also caused me to use different tools for things I previously would have used Claude for. For example, for detailed plans I now use Codex rather than Claude, because I hit the limit way too fast when doing documentation work. It certainly doesn't hurt that Codex seems to be better at it, but I wouldn't even have a Codex subscription if it weren't for Claude's usage limits.

Wow, weird to see someone mirror my experience so closely. At the $100 plan, my day was being warped around how to maximize multiple 5-hour sessions so that it felt worth it. Dropped down to the $20 plan and stopped playing the game, since I know I'll just consume the weekly usage in the few days I have free. Meanwhile, Codex gave me a free month; their 5HourUsageWindow:WeeklyUsageWindow ratio feels way better balanced, and I get way more work done from it. Similar to you, any task involving reading/reviewing docs [or code reviews] now insta-nukes Claude's usage. My record is 12 minutes so far...

Another big one for me is that they dropped the cache TTLs. It is normal for me to come back to a session an hour later, but someone "autopilot"-ing won't have such gaps.

Not just the cache, though. Every time you stop and come back, it basically reloads the whole session. If you just let it keep going, it counts as one smooth run. You hit the wall faster for actually checking its work.

It was probably the bug where the cache got purged after 5 minutes rather than 1 hour. You can review things pretty well within an hour; 5 minutes is a real crunch. 5 minutes doesn't mix with multitasking or getting interrupted.
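
To put rough numbers on it (the prices below are purely illustrative assumptions, not Anthropic's actual rates): cached reads tend to be roughly an order of magnitude cheaper than fresh input, so a purged cache means re-paying full price for the entire session context on your next prompt. A minimal sketch:

    # Hypothetical $/Mtok prices for illustration only -- not real rates.
    CONTEXT_TOKENS = 100_000      # assumed size of an ongoing session
    FRESH_PER_MTOK = 15.00        # assumed price for uncached input
    CACHED_PER_MTOK = 1.50        # assumed price for a cache hit

    hit = CONTEXT_TOKENS / 1e6 * CACHED_PER_MTOK
    miss = CONTEXT_TOKENS / 1e6 * FRESH_PER_MTOK
    print(f"resume within TTL: ${hit:.2f}, after purge: ${miss:.2f}")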

I think the culty element of AI development is really blinding a lot of these companies to what their tools are actually useful for. They're genuinely great productivity enhancers, but the boosters are constantly going on about how it's going to replace all your employees, and it's just... not good for that! And I don't mean "not yet"; I mean I don't see it ever getting there, barring some major breakthrough on the order of inventing a room-temp superconductor.

I agree with you, the "replacing people" narrative is not only wrong, it's inflammatory and brand suicide for these AI companies who don't seem to realize (or just don't care) the kind of buzz saw of public opinion they're walking straight towards.

That said, looking at the way things work in big companies, AI has definitely made it so one senior engineer with decent opinions can outperform a mediocre PM plus four engineers who just do what they're told.


Autopilot (yolo mode) is amazing and feels great: truly delegating instead of hand-holding on every step.

Do you have any good resources on how to work like that? I made the move from "auto complete on steroids" to "agents write most of my code". But I can't imagine running agents unchecked (and in parallel!) for any significant amount of time.

Right now, I'm finding a decent rhythm in running 10-20 prompts and then kind of checking the results a few different ways. I'll ask the agent to review the code, I'll go through myself, I'll do some usability and gut checks.

This seems to be a good window where I can implement a pretty large feature, and then go through and address structural issues. Goofy things like the agent adding an extra database, weird fallback logic where it ends up building multiple systems in parallel, etc.

Currently, I find multiple agents in parallel on the same project to be not super functional. There are just a lot of weird issues: agents get confused about worktrees, git conflicts abound, and I found the administrative overhead to be too heavy. I think plenty of people are working on streamlining the orchestration issue.

In the meantime, I combat the ADD by working on a few projects in parallel. This seems to work pretty well for now.

It's still cat herding, but the thing is that refactors are now pretty quick. You just have to stay aware of them.

I was thinking it'd be cool to have an IDE that colored, say, the last 10 git commits to a project so you could see what has changed. I think robust static analysis and code-as-data tools built into an IDE would be powerful as well.

The agents basically see your codebase fresh every time you prompt. And with code changes happening much more regularly, I think devs have to build tools with the same perspective.
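
Something like that last-10-commits view would be easy to prototype against plain git. A rough sketch (the churn-count heuristic here is just an assumption, not an existing tool):

    import subprocess
    from collections import Counter

    # Count how often each file changed in the last 10 commits -- the
    # kind of "recent churn" data an IDE overlay could color by.
    log = subprocess.run(
        ["git", "log", "-10", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    churn = Counter(line for line in log.splitlines() if line)
    for path, count in churn.most_common():
        print(f"{count:3d}  {path}")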


Same here, especially when I keep catching things like DRY violations and a lack of overall architecture. Everything feels tacked on.

To give them the benefit of the doubt, perhaps these people provide such detailed specs that they're basically writing code in natural language.


It will come naturally! I started with autocomplete as well. I was stumbling upon different problems and fixing them by adopting best practices. My current stack is:

1/ Claude Code with yolo mode

2/ superpowers plugin

3/ red/green tdd

4/ a lot of planning and requirements before writing any code

It feels like you are always touching the edge of what the models and your current workflow are capable of. Delegate a task that's too complex and the system fails; delegate a simpler one and the system works great. Improve your workflow and you move this complexity to a higher level.


But... I have been an LLM power user for more than a year and a half now. I can't delegate, precisely because I've reviewed a lot of LLM code, and it is never good enough for me to step down from reviewing everything manually. I can understand how you can vibe code a dashboard or tests, but vibe code your entire backend without checking it through carefully? Madness.

I would also be interested in resources on "agents write most of your code" if you can share some.

For me: open a markdown editor and draft a code plan with details of what you'd do as a coder at a high level, then jump into whatever tool in planning mode (I usually fire this into the Opus 4.5 model) and have it break the plan down into concise steps, then hand it off to a simple model (gpt spark, sonnet, composer or whatever) to execute. When I feel frisky I'll just have Opus one-shot it, and it can be done in a few minutes.

I use Claude “on the web” or Google Jules. Essentially everything happens in a sandbox - so yolo isn’t a huge risk. You can even box its network access. You review the PR at the end or steer it if it’s veering off course.

I have Max 5x and use only Claude Opus on xhigh mode. I don't use agents, or even MCPs, and stick to Claude Code.

I find it incredibly difficult to saturate my usage. I'm ending the average week at 30-ish percent, despite this thing doing an enormous amount of work for (with?) me.

Now I will say that with pro I was constantly hitting the limit -- like comically so, and single requests would push me over 100% for the session and into paying for extra usage -- and max 5x feels like far more than 5x the usage, but who knows. Anthropic is extremely squirrely about things like surge rates, and so on.

I'm super skeptical of the influx of "DAE think Opus sucks now. Let's all move to Codex!" nonsense that has flooded HN. Part of it is the ex-girlfriend thing, where people are angry about something and try to force-multiply their disagreement, but some of it legitimately smells like astroturfing. Like OpenAI just got done paying $100M for some unknown podcaster and started hiring people to write this stuff online.


I was in the same boat until the last few days, when just a handful of queries were enough to saturate my 5h session in about 30 minutes.

Recently I've gotten Qwen 3.6 27b working locally and it's pretty great, but still doesn't match Opus; I've got to check out that new DeepSeek model sometime.


Yeah, I never got how people are even able to hit the weekly limits so consistently. Maybe it's because they use it for work? But in that case, you would expect the employer to cover it, so idk.

>I'm super skeptical of the influx of "DAE think Opus sucks now. Let's all move to Codex!" nonsense that has flooded HN. Part of it is the ex-girlfriend thing, where people are angry about something and try to force-multiply their disagreement, but some of it legitimately smells like astroturfing. Like OpenAI just got done paying $100M for some unknown podcaster and started hiring people to write this stuff online.

A lot of people are angry about the whole openclaw situation. They are especially bitter that, when they attempted to justify exfiltrating the OAuth token to use for openclaw, nobody agreed that they had the right to do so, and people sided with Claude that different limits for first-party use are standard. So they create threads like this and complain about some opaque reason why Anthropic is finished (while still keeping their subscriptions, of course).


If only OpenAI spent a significant amount of money on some kind of generative software that was predominantly trained on internet comments that'd be able to do all the astroturfing for them...

A bunch of green accounts would be a bit of a tell. They need to use established accounts, ideally pre-llm, for astroturfing. This is going to be increasingly true.

This kind of sarcastic "if only" comment belongs on Reddit from 5 years ago.

> the day when LLM-assisted coding is commoditized

Like yesterday? LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

My definition of LLM-assisted coding is that you fully understand every change and every single line of the code. Otherwise it's vibe coding. And I believe if one is honest to this principle, it's very hard to deplete the quota of the $100 tier.


> Like yesterday? LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

But, it's not $100/mo. I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield. Check out their pricing and then go look at Reddit for actual experiences. With video generation the results are very easy to see. With code generation the results are less clear for many users. Especially when things "just work".

Again, it's not $100/month for Anthropic to serve most uses. These costs are still being subsidized, and as more expensive plans roll out with access to "better" models and "more" tokens and context, the true cost per user is slowly starting to be exposed. I routinely hit limits with Anthropic that I hadn't been hitting for the same (and even less) utilization. I dumped the Pro Max account recently because the value wasn't there anymore. I am convinced that Opus 3 was Anthropic's pinnacle at this point, and while the SotA models of today are good, they're tuned to push people towards paying for overages at a significantly faster consumption rate than a right-sized plan for their usage.

The reality is that nobody can afford to continue to offer these models at the current price points and be profitable at any time in the near future. And it's becoming more and more clear that Google is in a great position to let Anthropic and OAI duke it out with other people's money while they have the cash, infrastructure and reach to play the waiting game of keeping up but not having to worry about all of the constraints their competitors do.

But I'd argue that nothing has been commoditized as we have no clue what LLMs cost at scale and it seems that nobody wants to talk about that publicly.


> I think the best showcase of where AI is at is on the generative video side. Look at players like Higgsfield. Check out their pricing and then go look at Reddit for actual experiences. With video generation the results are very easy to see

Video is a different ballgame entirely; it's slower than realtime even on _large_ GPUs. Moreover, because of the inter-frame consistency, it's really hard to transfer and keep context.

Running inference on text is, or can be, very profitable. It's research and dev that's expensive.


My point wasn't the delta in work between video and text generation. It was that the degradation of a prompt is much more visible (because: literal). But, generally agree on the research/dev part.

> fully understand every change and every single line of the code.

I'm probably just not being charitable enough to what you mean, but that's an absurd bar that almost nobody meets, even for fully handwritten code. Nothing would get done if they did. But again, my emphasis is that I'm probably just not being charitable to what you mean.


You're most likely being pedantic, like when someone says they understand every single line of this code:

    x = 0
    for i in range(1, 10):
      x += i
    print(x)
They don't mean they understand the silicon substrate of the microprocessor executing microcode, or the CMOS sense amplifiers reading the SRAM cells caching the loop variable.

They just mean they can more or less follow along with what the code is doing. You don't need to be very charitable in order to understand what he genuinely meant, and understanding the code one writes is how many (but not all) professional software developers, at least those who didn't just copy and paste stuff from Stack Overflow, used to carry out their work.


You drew it to its most uncharitable conclusion for sure, but yeah, that's pretty much the point I was making.

How deeply do I need to understand range() or print() to utilize either, on the slightly less extreme end of the spectrum?

But yeah, I'm pretty sure it's a point that maybe I could have kept to myself, and been charitable instead.


"Understanding your code" in this day and age likely means hitting the point of deterministic evaluation.

print(x) is a great example. That's going to print x. Every time.

Agent.print(x) is pretty likely to print x every time. But hey, who knows, maybe it's having an off day.


How is that an absurd bar? If you're handwriting code, you need to know what you actually want to write in the first place, hence you understand all the code you write. Therefore the code the AI produces should also be understood by you. Anything less than that is indeed vibe coding.

A lot of developers don't actually understand the code they write. Sure, nowadays a lot of code is generated by LLMs, but in the past people just copied and pasted stuff off of blogs, Stack Overflow, or whatever other resources they could find, without really understanding what it did or how it worked.

Jeff Atwood, along with numerous others (whom Atwood cites on his blog [1]), was not exaggerating when he observed that the majority of candidates who had existing professional experience, and even MSc degrees, were unable to code very simple solutions to trivial problems.

[1] https://blog.codinghorror.com/why-cant-programmers-program/


It's an absurd bar if you are being an uncharitable jerk like I was; the layers go deep, and technically I can claim I have never fully grasped any of my code. It is likely just a dumb point to bring up, tbh.

I saw your reply to another comment [0], and I see what you mean now. By "understand each line of code" I meant that one would know how that for loop works, not the underlying levels of the language implementation. I replied initially because lots of vibe-coding devs in fact do not read all the code before submitting, much less review it line by line and understand each line.

[0] https://hackernews.hn/item?id=47894279


Well, that is how it mostly worked until recently... unless the developer copied and pasted from Stack Overflow without understanding much. Which did happen.

Could they have meant "every line of code being committed by the LLM" within the current scope of work?

That's how I read it, and I would agree with that.


It's a good point. To me this really comes down to the economics of the software being written.

If it's low-stakes, then the required depth to accept the code is also low.


I do. If you don't, maybe you shouldn't be writing software professionally. And yes, I've written both DBs and compilers, so I do understand what is happening down to the CMOS. I think what you are doing is just cope.

Nah, you're kind of encapsulating what I had in mind:

At what level of abstraction can you claim to actually "understand" the code?

You're claiming to understand down to the CMOS, but you are failing to even engage with the question of what level of understanding should be accepted. Is "down to the CMOS" the bar? Because then you're fighting an uphill battle as potentially the only human who traces a simple hello-world Python script down to that level; that's not how people develop software with high-level languages.

Is understanding print()'s underlying code the bar? That seems fairly gatekeepy; it's kind of intuitive what a print does, and everyone trusts it's going to do what it's designed to do, the same way we trust the water that comes out of our faucets.


I mean "understanding it just like when you hand wrote the code in 2019."

Obviously I don't mean "understanding it so you can draw the exact memory layout on the white board from memory."


You don't understand every change you make in the PRs you offer for review?

>LLM-assisted coding is $100/mo. It looks very commoditized when most households in the developed world pay more than that for electricity.

This is a small nit, but you still have to pay your electric bill; the $100/mo is on top of that. If you're doing cost accounting, you don't want to neglect any costs. Just because you can afford to lease a car doesn't mean you can afford to lease a second car.


You mean the regular electricity bill for your house and computer use? Computation runs in the cloud, so I'm not sure what you're trying to argue here.

Commoditization will be complete for my purposes when an LLM trained on a legitimately licensed corpus can achieve roughly what Opus 4.5+ or the highest powered GPTs can today.

I anticipate a Napster-style reckoning at some point when there's a successful high-profile copyright suit around obviously derivative output. It will probably happen in video or imagery first.


In industry, the cost is more than $100/mo for engineers. With increased adoption and what I know now, I expect full-time devs to rack up $500-$2000 usage bills if they're going full parallel agentic dev. Personal usage for projects and non-production software is not a benchmark IMO.

I work with a lot of full-time devs, and it is very hard to go beyond the $200 max plan. If you use API credits, and I think the enterprise plan kind of forces you to do this, you can definitely incur this much, particularly if you're not using prompt caching and things like that.

But I and others in my company have very heavy usage. We only rarely, with parallel agentic processes, run out of the $200 a month plan.

And what do I mean by "hard"? I mean it requires a lot of deliberate effort to max it out. I'm sure there are some use cases where it is not hard to do this, but in general, I find most devs can't even max out the $100 a month plan, because they haven't quite figured out how to leverage it to that degree yet.

(Again, if someone is using the API instead of subscription, I wouldn't be surprised to see $2,000 bills.)


Business/Enterprise accounts are billed at $20/seat + API prices, not subscription prices. You can give them a monthly dollar quota or let them go unlimited, but they're not being subsidized like the Team plan. And Team can't get a 20x plan, from what I can tell.

I routinely use $4k to $5k worth of tokens a month on my $200/mo Max subscription. I don't even code every day.

You can use a Max subscription for work, btw.


You do understand the concept of a subsidy right?

I do. Do you? A company providing a cheaper subscription plan is not a subsidy.

I assume you meant loss leader. We can't know that without knowing their financials. The actual marginal cost of inference is demonstrably less than $200/mo though, so it's not clear whether they are operating at a loss; without seeing their books, we can't know.


I'd recommend Kimi k2.6 for your use. It is an excellent model at a fraction of the cost, and you can use Claude Code with it.

I did a 1:1 map of all my Claude Code skills, and it feels like I never left Opus.

Super happy with the results.


I was saying the same until DeepSeek v4 this morning... sorry, Kimi. The competition is intense!

Fascinating, but a bummer that DeepSeek does not offer a DPA or an opt-out for training. That renders it unusable for my use cases, unfortunately. At least z.ai's GLM has something like a DPA in Singapore.

The weights are open and you can use the model with any third party provider that gives you the DPA you want.

For my use case, I want the providers to get my tokens as long as they plan to keep releasing open-weight models.


If you don't use a lot of quota, the cheapest monthly Claude Code plan is $20 and Kimi Code is $19, i.e. the cost difference is minuscule.

Kimi wants my phone number on signup, so it's a no-go for me.


What provider do you use for Kimi?

The provider is a massive issue. People moving off Claude tend to assume this is solved.

Claude's uptime is terrible. The uptime of most other providers is even worse... and you get all the quantization, you don't know what model you are actually getting, etc.


Kimi 2.5 was like using Sonnet 4 on a flaky ADSL line. I haven't tried K2.6 yet, but the physical unreliability of the connection was too off-putting.

OpenRouter, and I'm toying around with Hermes. Seems good so far, but I haven't really gotten into anything heavy yet. Though the "freedom" of not sweating the token pause, and the costs not being too high, is real.

Straight from them. I know other providers like io.net can be faster, but I like to directly support the project.

Thx. I'll try it with my personal projects (because, due to the data collection and ToS, most providers are forbidden in my company), if I can opt out of training on my input.

I'm just getting a bit tired of using Opus 2.6, which eats my whole allowance and then some £££ going through the 4 kB prompt to review a ~13 kB text file twice, and that's on top of the sometimes utterly bonkers, bad, lazy answers that I don't get even from the local Gemma 4 E4B.


Did you just copy-paste, or is there a difference in the way Kimi uses skills?

I don't have the prompt at hand, but basically I told Kimi (paraphrasing): I have these Claude Code skills, and I know it uses different tool calls than you do, but read them and re-write them as your own tools.

I also created a mini framework so it can test that the skills are actually working after implementation.

Everything runs perfectly.


Same. Never hit a limit. Use it heavily for real work. Never even thought of firing off an LLM for hours of... something. Seems like a recipe for wasting my time figuring out what it did and why.

Similar, with the copilot rather than autopilot usage. I find it's the best of them all. Mostly I just use it as an occasional search engine. I've never found LLMs to be efficient at actually doing work. I do miss the days when tech docs were usable. Claude seems like a crutch for gaps in developer experience more than anything.

Honestly, it sounds like, assuming you have no ethical qualms, you could get by with a Mac or AMD 395+ and the newest models, specifically QWEN3.5-Coder-Next. It does exactly as you describe. It maxes out around 85k context, which if you do a good job providing guard rails, etc, is the length of a small-medium project.

It does seem like the sweet spot between WALL-E and the destroyed Earth in WALL-E.


Sorry, out of the loop. Which ethical qualms are you referring to?

Using a Mac, obviously.

I have ethical qualms to varying degrees with most LLMs, primarily because of copyright laundering.

I'm a BSD-style Open Source advocate who has published a lot of Apache-licensed code. I have never accepted that AI companies can just come in and train their models on that code without preserving my license, just allowing their users to claim copyright on generated output and take it proprietary or do whatever.

I would actually not mind licensing my work in an LLM-friendly way, contributing towards a public pool from which generated output would remain in that pool. Perhaps there is opportunity for Open Source organizations to evolve licenses to facilitate such usage.

For what it's worth, I would be happy to pay for a commercial LLM trained on public domain or other properly licensed works whose output is legitimately public domain.


My guess - China.

Seems like the AMD 395+ only does about 16 tokens/s, which is 25-33% of the speed of SOTA models. Break-even on a $3000 machine is ~15 months.

That's pessimistic. Do the calc assuming cloud provider X changes your nondeterministic output every Y months with probability Z and increases prices by 10% every 6 months.

Slow and steady is worth exponentials. Keep slopping it, my boid.
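
For what it's worth, a quick sketch of just the price-bump part of that calc (the $3000 box, $200/mo plan, and 10%-every-6-months increase are the assumed numbers from the comments above, not real pricing):

    # Months until a $3000 local box beats a $200/mo subscription whose
    # price rises 10% every 6 months (assumed numbers, not real pricing).
    def months_to_break_even(hw_cost=3000.0, monthly=200.0,
                             bump=1.10, every=6):
        spent, month = 0.0, 0
        while spent < hw_cost:
            month += 1
            spent += monthly * bump ** ((month - 1) // every)
        return month

    print(months_to_break_even(bump=1.00))  # flat pricing: 15 months
    print(months_to_break_even())           # with price bumps: 14 months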


The broader theme of antagonism to Black success motivating the thoroughness of the destruction is a common observation about Tulsa.

Honestly, the entire country up until maybe 40 or so years ago.

Arguably happening right now due to a joke Obama made about Trump

[flagged]


Here's a contemporary opinion from the state attorney general at the time, the highest-ranking person in a judicial apparatus that didn't prosecute anyone for participating in it. Looks like the fact that "the Negro" was so rich he didn't "accept the white man as his benefactor" was a pretty big deal...

The cause of this riot was not Tulsa. It might have happened anywhere for the Negro is not the same man he was thirty years ago when he was content to plod along his own road accepting the white man as his benefactor. But the years have passed and the Negro has been educated and the race papers have spread the thought of race equality.


There is no discussion of wealth in your quote. And further, that quote supports what I've been saying.

It specifically says "the cause of this riot was not Tulsa", and "It might have happened anywhere". If it "might have happened anywhere", it therefore has nothing to do with the unique high-wealth of this area.


Takes a lot of cognitive dissonance to unironically suggest that the axiomatic Southern racist belief that "the Negro" should regard "the white man as his benefactor" has no links to their relative wealth.

When you find yourself drawing parallels between your own arguments and those of contemporary white supremacists asserting that the attitudes of local whites were not at all to blame, it's perhaps a good idea to reconsider...


Wow. Your own quote validates what I'm saying, but regardless of your position that it isn't so...

Delving into "you're a racist if you don't agree with my take on this event" is a very, very scummy and morally bankrupt thing to do. Especially since you're literally making up that comparison, for nothing I've said indicates I absolved anyone of anything at all. In fact, I've been asserting that whites of the time didn't need a special reason, like wealth, to do what they did.

People asserting that wealth was required to cause this tragedy are actually giving excuses as to why it happened. Racism doesn't require that.


Nice to see sealioning is alive and well on HN.

But do we actually have proof that ** bombed *? Maybe they bombed their own school to make you feel sad?

Seeing another one of these breaches had me returning to look at local-first software. https://lofi.so

I feel like if we're going to make progress in preventing wholesale data breaches it will be through architectural innovations that attack the problem of why a trove of concentrated data needs to exist. Even if the government needs to be a central authority, are there ways to house the data that limit the blast radius?
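
As one hedged sketch of what limiting the blast radius could look like (the names and architecture here are illustrative assumptions, not a real system): keep per-record keys in a service housed separately from the data, so a dump of the central store alone yields nothing readable.

    from cryptography.fernet import Fernet  # pip install cryptography

    key_service = {}   # stand-in for a separately housed key service / HSM
    data_store = {}    # stand-in for the breachable central database

    def store(record_id: str, plaintext: bytes) -> None:
        key = Fernet.generate_key()
        key_service[record_id] = key        # key never touches the data store
        data_store[record_id] = Fernet(key).encrypt(plaintext)

    def fetch(record_id: str) -> bytes:
        return Fernet(key_service[record_id]).decrypt(data_store[record_id])

    store("ssn:alice", b"123-45-6789")
    print(fetch("ssn:alice"))  # requires access to both services

A breach of the data store alone leaks only ciphertext; an attacker has to compromise both services at once, which shrinks the blast radius of any single breach.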

I'm sure there are innumerable arguments why this can't help, but when the mainstream alternative is despair and helplessness, progress will be made in the margins.


I see lots of speculation that Anthropic needs to cut usage because they are compute constrained. If that's the case, will they be focusing on reducing compute costs for their models?

From what I can tell, Opus 4.7 is more resource-intensive than Opus 4.6, which in turn is more resource-intensive than Opus 4.5.


I'm unpersuaded by the assertion that closing the source is an effective security bulwark.

From that page:

> Today, AI can be pointed at an open source codebase and systematically scan it for vulnerabilities.

Yeah, and AI can also be pointed at closed source as soon as that source leaks. The threat has increased for both open and closed source in roughly the same amount.

In fact, open source benefits from white hat scanning for vulnerabilities, while closed source does not. So when there's a vuln in open source, there will likely be a shorter window between when it is known by attackers and when authors are alerted.


The HN discussion on the announcement is just 90% posts along the lines of "if a student can brute-force your FOSS for $100, they can do your proprietary code for $200" and "if it's that cheap to find exploits, why don't you just do it yourself before pushing the code to prod?"

I believe the reason they chose to close the source is just security theater to demonstrate to investors and clients: "Look at all these FOSS projects getting pwned; that's why you can trust us, because we're not FOSS." There is, of course, probably a negative correlation between closing source and security. I'd argue that the most secure operating systems, used in fintech, health, government, etc., got to be so secure specifically by allowing tens or hundreds of thousands of people to poke at their code and then allowing thousands or tens of thousands of people to fix said vulns pro bono.

I'd be interested to see an estimate of the financial value of the volunteer work on, say, the Linux or various BSD kernels. Imagine the cost of paying to produce the modern Linux kernel. Millions and possibly billions of dollars just assuming average SWE compensation rates, I'd wager.

Too bad cal.com is too short sighted to appreciate volunteers.


> Millions and possibly billions of dollars just assuming average SWE compensation rates

Yeah, and average kernel devs are not average SWEs


I think it's more prosaic: OSS is great for building a user base but not great at generating revenue. So you wave the OSS flag while you build a user base, then pull out whichever flimsy excuse seems workable at the time when you want to start step two of your enshittification plan.

The only thing new here is the excuse.


How are LLMs at reading assembly? I assumed they’d be able to read assembly about as well as any other language…

Is there such a thing as a closed source program anymore?


Not only are they good at reading and writing machine code now, they are actively being used to turn video game cartridge dumps back into open source code the community can then compile for modern platforms.

There is no moat anymore.


They are REALLY good at it.

A much better argument would be: "if you can point the AI at it to scan for vulnerabilities, why not do that yourself and fix the vulnerabilities?"

If you believe they really did it for security, I have a very nice bridge to sell you for an extremely low price ...

Look, tech companies lie all the time to make their bad decisions sound less bad. A simple example: almost every "AI made us more efficient" announcement is really just a company making (unpopular) layoffs and trying to brand them as part of an "efficiency effort".

I'd bet $100 this company just wants to go closed source for business reasons, and (just like with the layoffs masquerading as "AI efficiency") AI is being used as the scapegoat.


Who says I believe it? ;)

I'm just choosing to focus on the substance of the argument itself, which I think is risible regardless of who makes it and why.

