drzaiusx11's comments

Afaict the "desktop" target is meaningless these days. Desktops aren't really a thing anymore in the general sense, are they? The only folks I know still hanging on to desktop hardware are gamers, and even they are falling by the wayside as external video cards become more reliable.

"Daily driver" is probably a better term, but everyone's daily usage patterns will vary. I could do my day job with a VT100 emulator on a phone for example.


There are a zillion office workers with low-cost mini PCs from the big OEMs on their desks. After all, all those off-lease mini PCs on eBay that are so beloved by home lab enthusiasts have to come from somewhere.

For whatever reason, I don't really register those little hockey pucks (Mac minis, NUCs, etc.) the same way as the desktop tower PCs of old. A me problem for sure, but those mini devices vary _wildly_ in capabilities from manufacturer to manufacturer, from full-blown Intel i9s to little more than headless phones running ChromeOS on underpowered ARM chips. Desktops _used_ to be fairly standardized: one CPU arch, the same order of magnitude of RAM, running the Windows du jour, etc. Today the landscape isn't so monotonous (and that's a good thing!)

The "desktop" market includes laptops but excludes servers, phones, tablets, etc.

Asahi Linux is fantastic these days, but as with most Linuxes on laptops, power management / battery life is the worst part. If treating a laptop like a portable desktop is OK for your use case, you'll be plenty happy. If you're away from an outlet for too long, however, you'll find it lacking. At least that's my experience. It's possible they'll eventually figure that out too...

Been using Colima to run mixed architecture container stacks in docker compose on my M3 Mac and the machine barely blinks. I get a full day running a dozen containers on a single battery charge.

Colima is backed by qemu, not Rosetta, so if Rosetta disappeared tomorrow I don't think I'd notice. I'm sure Rosetta is "better", but when the competition is "good enough" it doesn't really matter.
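
For anyone curious, a minimal sketch of that kind of setup (resource sizes and images here are just illustrative; it assumes Colima's --vm-type/--arch flags and Docker's --platform option):

    # Start a qemu-backed Colima VM on an Apple Silicon host
    colima start --vm-type qemu --arch aarch64 --cpu 4 --memory 8

    # Native arm64 containers run at full speed inside the VM
    docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64

    # amd64-only images are handled by qemu user-mode emulation
    docker run --rm --platform linux/amd64 alpine uname -m   # prints x86_64

    # In docker compose, individual services can pin "platform: linux/amd64",
    # so a mixed-architecture stack comes up with a single "docker compose up"

Same-arch containers run natively while the amd64 ones go through emulation, which is where the "good enough" vs "better" trade-off comes in.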


The acceleration of Anthropic's evil timeline must be from all those AI productivity gains we hear so much about.

Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...

In general, public benefit corporations and non-profits should have a very modest salary cap for everybody involved and specific, legally binding public-benefit mission statements.

Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.

Non-profits where the CEO makes millions or billions are a joke.

And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.


"A very modest salary cap" works if your mission is planting trees. Not so much if what you're building is frontier AI systems.

I think that's the point though. The AI companies can't compete without hiring very talented employees and raising lots of money from investors. Neither the employees nor the investors would participate if there weren't the potential for making mountains of money. So these AI companies fundamentally can't be non-profits or true B-corps (I realize that's a vague term, but it certainly means not doing whatever it takes to make as much money as possible), and they shouldn't pretend they are.

To me, it feels like saying "you can't be a public benefit corporation unless all the labor involved in delivering that public benefit is cheap".

Which just doesn't seem like it should be true?

Sure, some "public benefit" missions could scale sideways and employ a lot of cheap labor, not suffering from a salary cap at all. But other missions would require rare, high-end, high-performance, high-salary specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists who will put up with being paid half their market worth for the sake of the mission.


>But other missions would require rare, high-end, high-performance, high-salary specialists who are in demand - and thus expensive. You can't rely on being able to source enough altruists who will put up with being paid half their market worth for the sake of the mission.

That's exactly what a non-profit should be able to rely on. And not just "half their market worth", but even many times less.

Else we can just say "we can't really have non-profits, because everybody is a greedy pig who doesn't care about public benefit enough to give up profits for a still perfectly livable salary" - and be done with it.


This would shut down about half the hospitals in the US.

Ah, US healthcare, that paragon of value-for-money and not-for-profitness...

Yeah, I'm sure the fix for that is to shut down or transition all of the remaining non-profit hospitals to a for-profit model.

That's a post hoc argument.

The real danger is "We make mountains of money, but everyone dies, including us."

The top of the top researchers think this is a real possibility - people like Geoffrey Hinton - so it's not an extremist negative-for-the-sake-of-it POV.

It's going to be poetic if the Free Markets Are Optimal and Greed-is-Rational Cult actually suicides the species, as a final definitive proof that their ideology is wrong-headed, harmful, and a tragic failure of human intelligence.

But here we are. The universe doesn't care. It's up to us. If we're not smart enough to make smart choices, then we get to live - or die - with the consequences.


If a non-profit can't attract people motivated by something other than profit, perhaps it shouldn't exist.

While I agree, if you need high profits to survive, you're not off to a great start as a nonprofit.

It’s not the CEO’s fault - they had to take all that money to keep their org a non-profit.

B corps are like recycling programs: a nice logo.


Don't they get tax breaks and more lax operating requirements? I don't think this is just an image thing.

No, under US law charities and non-profits are typically eligible for some kinds of tax benefits but public benefit corporations are not.

Are you saying that recycling is a scam?


It really depends on the type of material and the country. Many monoplastics and almost all cardboard can be recycled, and are (e.g. in Germany and other European countries).

> Recycling mostly means "sent to landfills in the third world"

This is less true now that China banned plastic waste imports.

I agree though that the average person might overestimate how much of their waste can be recycled. However, many materials are recycled and then re-used, so it's not like the whole concept is a scam.


Mostly, yeah. "Yet the industry spent millions telling people to recycle, because, as one former top industry insider told NPR, selling recycling sold plastic, even if it wasn't true." https://www.npr.org/2020/09/11/897692090/how-big-oil-misled-...

Many recycling programs don't actually recycle.

https://www.cbsnews.com/news/critics-call-out-plastics-indus...


If we're speaking in generalities of corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.

What's the salary cap for hiring a team to build a frontier model? These kinds of rules will make PBCs weaker, not stronger.

>for hiring a team to build a frontier model? These kinds of rules will make PBCs weaker, not stronger

Weaker is fine if those working there are actually true to the mission, in it for the mission and not for the profit.

Same with FOSS really, e.g. I'd rather have a weaker Linux that's an actual community project run by volunteers than a stronger Linux that's just corporate agendas and corporate hires with an open license on top.


You're overthinking this. Just give the beneficiaries of the corporation (which in the context of a "public" benefit corporation is the public) the grounds to sue if the company reneges on their mission, the same way shareholders can sue if a company fails to act in their interest.

PBCs are peak End of History liberal philanthropy that speaks to the kind of person whose solution to any problem is "throw a startup at it".

Fukuyama wasn't wrong, he was just early

As in a true believer in our present-day dystopia? I think chances are we'd evolve a few more neo variants of fascism at least a few times, in between some neo variants of liberal history-ending ones (I think abundance is next?), before the bombs drop and give us the rest.

Like Google's old motto, 'Do no evil!' :D

> 'Do no evil!'

“Don’t be evil”. But yes, this behavior made me think about Google too. Context: https://en.wikipedia.org/wiki/Don%27t_be_evil


>Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.

Could you describe the model that you think might work well?


It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit and be corporations from the start. Skip the hand-wringing and the will-they/won't-they-betray-their-ethics phases entirely, since everyone knows they're going to choose profit over public benefit every time.

That model already exists and has worked well for decades. It's called being a regular ass corporation.


I understand, but being a regular corporation is not the only possible model. Can you think of something better?

> being a regular corporation is not the only possible model

the point is that it _is_ the only possible model in our marvellous Friedmanian economic structure of shareholder primacy. When the only incentive is profit, if your company isn't maximising profit then it will lose to other companies that are. You can hope that the self-imposed ethics guardrails _are_ maximising profit because the invisible hand of the market cares about that, but 1. it never really does (at scale) and 2. big influences (such as the DoD here) can sway that easily. So we're stuck with negative externalities because all that's incentivised is profit.


>the point is that it _is_ the only possible model in our marvellous Friedmanian economic structure of shareholder primacy. [...] So we're stuck with negative externalities because all that's incentivised is profit.

I'm curious about your thinking on this subject, if you email me at the email on my profile I have some specific questions about your views on this matter.

We've already created a digital sovereign nation called State of Utopia, which will be available at stateofutopia.com (or stofut.com for short). Our manifesto is here: https://claude.ai/public/artifacts/d6b35b81-0eeb-4e41-9628-5...

We have real services you can use immediately, such as this p2p phone/chat/video service with no time limits (Zoom has a 1 hour meeting limit for free accounts) and no tracking: https://stateofutopia.com/instacall.html

Just yesterday we published a fitness tool proof of concept: https://stateofutopia.com/experiments/bodyfat/

We do believe that it is important to have market dynamics, and our model is for this state to own state-owned companies as well. Getting this model right is important to us and we would like to engage with you on this subject. We hope you'll email us to discuss your thoughts further.


I feel like we went through this exact situation in the 2010s with social media companies. I don't get why people defend these companies or ever believe they have any sense of altruism.

Also, it seems to be the era where the government takes backdoor access to these services and data, as they did with social media.

That's not what happened here. They literally got forced into it by the Pentagon. https://www.axios.com/2026/02/24/anthropic-pentagon-claude-h...

Well, now I'm wondering, if the company was chartered with the public benefit in mind, could you not sue if they don't follow through with working in the public interest?

If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.

I'm not even a lawyer (I don't even play one on TV) and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law, but if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.


I really don't see it. PBCs are dual-purpose entities: under charter, they exist both to make a profit and to add some benefit to society. Profit is easy to define; benefit to society is a lot more difficult to define. That difficulty is reflected at the penalty stage, where few jurisdictions have any sort of examination of PBC status.

This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.


I think public benefit corporations (like Anthropic) are quite poorly defined, so I'm not sure how successful a lawsuit would be.

> Public benefit corporations in the AI space have become a farce at this point.

“At this point”? It was always the case, it’s just harder to hide it the more time passes. Anyone can claim anything they want about themselves, it’s only after you’ve had a chance to see them in the situations which test their words that you can confirm if they are what they said.


I presume in the beginning, many at OpenAI actually believed in the mission. Their good will simply was corrupted by the mountains of money on the table.

I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, either to be targeted for advertising or to be sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out, there aren't gonna be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.

A less cynical explanation: It's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription, and also probably for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.

I could be wrong, but I remember that Claude models didn't really ask follow-up questions. But since GPT models are doing that, and somehow people like that (why?), Anthropic started doing it as well.

Because Anthropic can do no wrong, correct?

With this new announcement, Anthropic is saying they can _specifically_ "do wrong" since it's in their best interests...

Yep I agree, I forgot to add /s to my comment

Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that with the Defense Industrial Act or whatever it's called if he designates them as critical to national defense.

It would've been better PR for Anthropic to let Hegseth do that instead of folding at the slightest hint of pressure and lost contract money. I've canceled my Claude subscription over this (and made sure to let them know in the feedback).

He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.

The press always says "the Pentagon negotiates". Does any publication have any evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the real Pentagon, as opposed to the Secretary of War.

I hope West Point will check for AI psychosis in their entrance interviews and completely forbid AI usage. These people need to be grounded.



Military academy boards have been purged and stacked with loyalists.

Hmm, that could be the best "IPO" they'll ever get. Better check whether Trump Jr.'s 1789 Capital has shares like they did in Groq (note the "q").


Thank you :)

That said, your implementation will now inevitably become training data for the next iteration of AI models...

Gives me Google dropping "don't be evil" vibes, what could go wrong?

I've seen more than a few rewrite attempts fail throughout the years. However, I've never seen a direct language-to-language translation fail. I've done several of these personally: Perl to Ruby, Java to Kotlin, etc.

Step 1 of any rewrite of a non-trivial codebase should be parity. You can always refactor to make things more idiomatic in a later phase.


Do the "linuxisms" inherent in a compatibility shim like linuxlator get exposed to users in day to day application use?

I figured it'd be more like how Proton provides Windows APIs on Linux and applications "just work" as per normal.

I admire your purist approach, but most folks don't have that luxury and just need to make do with what works today for their tooling of choice (or, more commonly, what their employer thrusts upon them).


Compatibility layers can also introduce security bugs. That's one of the reasons the layer was removed from OpenBSD.

BSD is more for purists anyway. Virtualization seems to be a better option than compatibility layers for the odd program that doesn't work natively.

Maybe it's different for Windows APIs on Linux, because by virtualizing Windows you're still dealing with an unfree OS.


Theo de Raadt, 2010, on the removal of emulation: “we no longer focus on binary compatibility for executables from other operating systems. we live in a source code world.”

(Since then, OpenBSD has gained support for virtualization for some operating systems including Linux, through the vmm(4) hypervisor.)


> Linux: Win32 is the only stable Linux ABI :(

> OpenBSD: Whatever the C compiler produced last is the only OpenBSD ABI :D


Technically, any software abstraction layer can have bugs (security or otherwise). That doesn't mean we should abandon higher-level abstractions.

If the BSD POSIX equivalent is the highest layer you're willing to use, you'll miss out on some great software.


FreeBSD itself used to have compat libraries for old FreeBSD releases. It used to be the same with NetBSD.

With OpenBSD, well, by policy, running old, insecure binaries is basically a no-no.


Imagine my surprise when I removed these from the kernel config, and then the 3C905 didn't work anymore because it relied on them!

Personally, I love the "fiasco" of secondary add-ons to consoles across the generations, from the cassette drive for the Atari 2600 to the Sega CD and 32X.

