Hacker News | new | past | comments | ask | show | jobs | submit | aleph_minus_one's comments | login

> Is there really a clear separation between tech companies and surveillance/military or is it wishful thinking?

The separation was never completely clear, but there was a time when the separation was much more marked.

The reason was simply that programming culture at that time was more "chaotic", "anti-authoritarian", "open-source"/"free software" (in the erstwhile understanding of this being part of a bigger movement, not just in the literal sense), "radical privacy" (cypherpunk), "hacking" (including the legally dubious aspects), etc.

These values were quite opposed to those of the military-industrial and surveillance-industrial complexes, so there was a lot of friction between the cultures of the tech companies of that time and these complexes, which made partnering unattractive for both sides - if only because of the friction that would have had to be overcome.


> One of the applications I was really hoping for was for 3d printers to be able to, by themselves, do things you could ask a human to do. Insert components (like screws, nuts, nylon wire, ...) [...] while printing a 3d model and, you know, just make that work.

Prusa is working on a Pick & Place Toolhead for the Prusa XL that is a step in this direction:

> https://blog.prusa3d.com/xl-in-2026-new-toolheads-lower-pric...

"One Print, Multiple Components: Pick & Place Tool

Some technical prints require additional components, such as magnets, threaded inserts, or bearings, to be placed during the build. Without automation, this typically means you have to pause the print and insert the part(s) by hand. Although PrusaSlicer made this process easier a while ago, the Pick & Place toolhead can do it for you, completely autonomously. This reduces manual intervention and improves placement accuracy.

We’ve co-developed the toolhead with the Zurich University of Applied Sciences (ZHAW) and it’s designed for models that combine 3D-printed models with off-the-shelf components. We’re currently targeting late 2026 with its implementation."


> 1) No energy from grid. Can't use coal or fossil fuel energy sources. Must have plan to provide excess TO grid.

This is easy: the companies will simply build some nuclear power plants near their data centers. Perhaps even nuclear power plants that are vibe-designed by their AIs. :-)


Faster and cheaper than nuclear power would be building a virtual power plant: adding big-ass banks of batteries charged during times of low demand or excess power capacity, and peak-shaving consumption when the rest of us need power.

This is an amendment I'd allow to those rules. There are other ways of storing energy, but battery banks are the obvious one, and they work well with shedding excess as well.
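The battery peak-shaving idea can be sketched as a toy simulation; all capacities and load figures below are invented purely for illustration, not real data-center numbers:

```python
# Toy peak-shaving sketch: charge a battery bank during hours with spare
# headroom, discharge it to cap the data center's draw from the grid.
# CAP_MWH and GRID_CAP_MW are hypothetical illustration values.

CAP_MWH = 400.0      # hypothetical battery bank capacity (MWh)
GRID_CAP_MW = 100.0  # max draw allowed from the grid at any hour (MW)

def peak_shave(hourly_load_mw, cap_mwh=CAP_MWH, grid_cap_mw=GRID_CAP_MW):
    """Return (grid_draw, battery_state) per hour for a load profile."""
    soc = cap_mwh  # state of charge; start fully charged
    grid_draw, states = [], []
    for load in hourly_load_mw:
        if load > grid_cap_mw:
            # shave the peak: cover the excess from the battery
            need = load - grid_cap_mw
            used = min(need, soc)
            soc -= used
            grid_draw.append(load - used)
        else:
            # recharge using the headroom below the grid cap
            room = min(grid_cap_mw - load, cap_mwh - soc)
            soc += room
            grid_draw.append(load + room)
        states.append(soc)
    return grid_draw, states

draw, soc = peak_shave([80, 150, 160, 90])
```

The point of the sketch: as long as the battery bank holds enough charge, the grid never sees more than the agreed cap, and off-peak hours are used to refill the bank.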

> I knew OpenAI was in trouble the instant they chose Altman over Ilya Sutskever.

I am not so sure:

This decision tells us something important about the priorities of the string-pullers behind the curtain:

They clearly want(ed) to monetize what is there, with the risk that OpenAI will only make smaller improvements to its AI models, and thus might get outcompeted by competitors who are capable of building and running a much better model.

If this is the priority (no matter whether you like or despise Sam Altman), you will likely prefer Sam Altman over Ilya Sutskever.

If, on the other hand, a fast monetization is less important than making further huge leaps towards much better AI models, you will, of course, strongly prefer Ilya Sutskever over Sam Altman.

Thus, I wouldn't say that choosing Sam Altman over Ilya Sutskever is a sign that OpenAI is in trouble, but rather a very strong sign of where the string-pullers behind the curtain want OpenAI to go. Both Sam Altman and Ilya Sutskever are just marionettes for these string-pullers. When they have served their role, they get put back into the box.


Yes, I agree. Altman was the rational choice if you realise that eventually the huge R&D bill will need to stop for at least a moderate period (<5 years).

You want to ride that out before capitalising on the eventually cheaper training costs once the rug has been pulled.

Altman has already succeeded here: it seems inference for API and chat is profitable, but offset by massive R&D costs.


All your competitors benefit from your training costs. They’ll lose on inference pretty quickly if they stop training new models, no?

I don't think they will lose on inference, because that assumes that compute becomes equally cheap for everyone.

Their spending today has secured their compute for the near future.

If every GPU, stick of RAM, and SSD is already paid for, who can afford to sell cheap inference?

Z.ai is trying to deal with this by using domestic silicon (basically Huawei, not Nvidia). And with their state subsidy, they will do well.

Anthropic has a 50bn USD plan to build data centres for 2026.

OpenAI similarly has secured extraordinary amounts of other people's money for data centres.

All of these will be sunk costs and "other people's money" while money is easy to get hold of, but they will be a moat when R&D ends.

Once all the models become basically the same, who you go with will be whoever you're already with (mostly OpenAI) or whoever you end up with (say, people who use Gemini because they have a Google 2TB account).

Some upstart can put itself into the ground borrowing compute and selling at a loss, but the moment it catches up and needs to raise prices, everyone will simply leave.

ChatGPT is what is most likely to remain a sustained frontier model. Maybe Claude jumps ahead a few more times, and Gemini will have its moment. But it'll all be a wash, with ChatGPT puttering along as rarely the best, but never the worst.


> Once all the models become basically the same who you go with will be who you're already with (mostly OpenAI)

Imho, people are undervaluing the last mile connection to the customer.

The last Western megacorp to bootstrap its way there was Facebook, and control over cloud identity and data was much less centralized circa the late 2000s.

The real clock OpenAI is running against is creating a durable consumer last-mile connection (killer app, device, etc).

"Easy to use chat app / coding tool" doesn't even begin to approach the durability of Microsoft, Apple, Google, or Meta. And without it, OpenAI risks any one of them pulling an Apple Maps at any time.

Unless it continually plows money into R&D to maintain the lead and doesn't pull an Intel and miss a beat.

Maybe they do, but that's a lot of coin flips that need to continually come up heads, in perpetuity.


> Kids shouldn't have tablets or smartphones or personal laptops before age 16.

If you make such a restriction, they'll secretly buy some cheap "unrestricted" device like some Raspberry Pi (just like earlier generations bought their secret "boob magazines").


Parents should have an allowlist of devices that are able to join their network. They can then require root certs or something for access outside of a narrow allowlist. There are a host of ways to solve both problems. Just remember to check for hardware keyloggers on your (the parents') devices, as kids could use them or try evil-maid attacks, etc. if they feel totally encaged.
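A minimal sketch of what generating such a device allowlist could look like, assuming a Linux router that can load nftables rulesets; the table name and MAC addresses are made-up placeholders, not a real configuration:

```python
# Sketch: generate an nftables ruleset that drops forwarded traffic from
# any LAN device whose MAC address is not on the parents' allowlist.
# The MAC addresses below are hypothetical placeholders.

ALLOWED_MACS = [
    "aa:bb:cc:dd:ee:01",  # parent laptop (placeholder)
    "aa:bb:cc:dd:ee:02",  # family TV (placeholder)
]

def build_ruleset(macs):
    """Build an nftables ruleset: drop by default, accept listed MACs."""
    lines = [
        "table inet parental {",
        "  chain forward {",
        "    type filter hook forward priority 0; policy drop;",
    ]
    for mac in macs:
        lines.append(f"    ether saddr {mac} accept")
    lines.append("  }")
    lines.append("}")
    return "\n".join(lines)

print(build_ruleset(ALLOWED_MACS))
```

Note that MAC filtering like this is trivially defeated by MAC spoofing, so it is containment rather than security.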

This will only work in practice if one of the parents is a network technician. :-)

I've said it before, but prohibition works if the goal is to reduce usage. I don't see this as a realistic problem.

> They are resourceful, sneaky and relentless.

... and honest:

- they will honestly tell you that they'd be very happy to see you dead when you impose restrictions upon them (people who are older could, of course, get into legal trouble for such a statement)

- they will tell you they wish you'd never given birth to them (or had aborted them)

- they will tell you that since they never wanted to be born, they owe you nothing

- ...


Sounds like a kid in need of psychiatric help.

You barely ever had to deal with pubescent children? :-)

I raised kids. Never had to deal with anything like what is described. Sounds like someone read some questionable books on parenting, unfortunately followed the bad advice in those books, and this is the result.

And this entire thing is about bad parenting. It's always easier to just give the kid a tablet and go back to whatever you were doing. It's always better to actually interact with the kid. That trade-off of time is important, because if you mess up when they are young, you spend a lot more time handling issues later on. The time you gained by giving them a tablet will get paid back someday, usually with interest. That's what is happening here.


Please get the kids some help before we have to send you thoughts and prayers

I mean, that's really not normal puberty stuff, but... okay.

> If ID checks are fully anonymous (as many here propose when the topic comes up) then every kid will just have their friends’ older sibling ID verify their account one afternoon.

Exactly the same way that kids in former days got cigarettes or alcohol: simply ask a friend or a sibling.

By the way: the owners of the "well-known" beverage shops made their own rules, which were in some ways stricter, but in other ways less strict than the laws:

For example, some small shop in Germany sold beverages with little alcohol to basically everybody who did not look suspicious, but was insanely strict about selling cigarettes: even if the buyer was sufficiently old (which was strictly checked when in doubt), the owner made serious attempts to refuse to sell cigarettes if he had the slightest suspicion that they were actually being bought for some younger person. In other words: if you attempted to buy cigarettes, you were treated like a suspect if the owner knew that you had younger friends (and the owner knew this very well).


> Most guardians would willingly do similar with locked devices.

Considering the echo chamber I was in at school, my friends would have simply used some Raspberry Pi (or a similar device) to circumvent any restriction the parents imposed on the "normal" devices.

Oh yes: in my generation pupils

- were very knowledgeable about technology (much more so than their parents and teachers) - at least the nerds who were actually interested in computers (if they hadn't been knowledgeable, they wouldn't have been capable of getting their DOS games to run),

- had a lot of time (no internet means lots of time and being very bored),

- were willing to invest this time into finding ways to circumvent technological restrictions imposed upon them (e.g. in the school network).


A fun fact about Grokipedia:

Many pages about math-related topics contain quite a lot of "red text", i.e. math markup that doesn't render properly, e.g.

> https://grokipedia.com/page/Derived_category (scroll down; for this page the mentioned phenomenon is particularly pronounced)

> https://grokipedia.com/page/K-theory

> https://grokipedia.com/page/Actuarial_notation

> https://grokipedia.com/page/Cobordism (scroll down to "Advanced Perspectives")


> I don't understand why people use Gmail. Just get a VPS and set up a SMTP server.

This would indeed be a good idea. The problem is that other email providers will often reject your emails (e.g. because they consider them spam or simply don't trust your server), so this idea is not easy to get working.

So the next-best solution is to use an email provider that is somewhat established (avoiding the mentioned problem), but more trustworthy than Google.
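For context on why self-hosted mail gets rejected: receiving servers typically check a sender domain's SPF, DKIM, and DMARC DNS records before trusting it at all. A hedged sketch of the minimal TXT records involved, using a placeholder domain, a documentation-range IP, and a dummy DKIM key:

```python
# Sketch of the minimal DNS TXT records a self-hosted mail server needs
# before large providers will even consider accepting its mail.
# "example.org", the IP, and the DKIM key are placeholders, not real
# infrastructure.

DOMAIN = "example.org"
MAIL_IP = "203.0.113.10"  # documentation-range IP (RFC 5737)

def minimal_mail_records(domain, ip):
    """Return {dns_name: txt_value} for a bare-bones SPF/DMARC/DKIM setup."""
    return {
        # SPF: declare that only this IP may send mail for the domain
        domain: f"v=spf1 ip4:{ip} -all",
        # DMARC: ask receivers to quarantine mail failing SPF/DKIM alignment
        f"_dmarc.{domain}": f"v=DMARC1; p=quarantine; rua=mailto:postmaster@{domain}",
        # DKIM: public key published under a selector; value is a placeholder
        f"mail._domainkey.{domain}": "v=DKIM1; k=rsa; p=<public-key-here>",
    }

for name, value in minimal_mail_records(DOMAIN, MAIL_IP).items():
    print(f'{name} TXT "{value}"')
```

Even with all three records in place, a fresh VPS IP usually starts with no sending reputation, which is exactly the rejection problem described above.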


I'm sorry, you might have missed the part where I was being sarcastic.

Setting up your own SMTP server is, for the most part, literally a bad idea - unless you want to debug your own mail server, which, I promise you, 99% of the users using Gmail do not, and should not.


Handing your digital life to a claw bot that was vibe-coded, however, is a good idea?

Every single piece of documentation and commentary about this says that it is not a good idea, but it does trigger people's inspiration and imagination about what the future is going to be. The whole conversation around this comment is that people don't get why people are excited about this.

> Every single piece of documentation and commentary about this says that it is not a good idea, but it does trigger people's inspiration and imagination about what the future is going to be. The whole conversation around this comment is that people don't get why people are excited about this.

If you have 10 minutes, here is an example of how LLM tech can trigger people's inspiration and imagination about what the future is going to be:

https://www.citriniresearch.com/p/2028gic

