Hacker News | nwellinghoff's comments

This reflects a lack of technical understanding of the subject. The proposal is not enforceable, as there is no control over the end user's networking stack. E.g., you can't actually rely on "3D printers that report themselves". The lawmakers need our help and some technical consultation.


The manufacturer can simply require that all prints go through their proprietary software, or better, cloud service for validation before the printer will accept them.

The person proposing this, and/or their staff certainly knows this.


It's okay, we just need a new tax to cover the mandatory SIM chips that will need to be installed in compliant printers (which will not be permitted to be operated in metal buildings or basements, of course.)


Indeed, very clever. I wonder, if you framed this problem up with Claude, how it would "guide" you to solve it. It would be an interesting matchup of AI vs. human. Love the story!


The feat was done before Claude Agent, which is why it was so challenging. Although I admit I am a heavy user now, circa the past two weeks. We shall not discuss my Claude Code experience, lest I have another mental breakdown at work and my employer has to send me home again. Let me put it this way: I have set up Claude with dangerously-skip-permissions, Agent Teams, Fast Mode, and the automated e2e test suite I designed, where it can see screenshots of every step plus the browser and API console logs. It is entirely hands-off software development. I have had to think long and hard about my identity as a software engineer. So forgive me if, for my passion project, I don't let Claude do everything, lest I remember the decades I spent reading those textbooks on my shelf, and the fear that I will forget it all.


Nice. I like how you made it an easy-to-drop-in proxy. Will definitely use this when debugging issues!


Yeah, it does not make a whole lot of sense, as the useful lifespan of the GPUs is 4-6 years. Sooo what happens when you need to upgrade or repair?


This is a question that analysts don't even ask on earnings calls for companies with lowly earthbound datacenters full of the same GPUs.

The stock moves based on the same promise that's already unchecked without this new "in space" suffix:

We'll build datacenters using money we don't have yet, fill them with GPUs we haven't secured or even sourced, power them with infrastructure that can't be built in the promised time, and profit on their inference time over an ever-increasing (on paper) lifespan.


> This is a question that analysts don't even ask

On the contrary, data centers continue to pop up deploying thousands of GPUs specifically because the numbers work out.

The H100 launched at $30k per GPU and rented for $2.50/hr. It's been 3 years since launch, and the rent price is still around $2.50.

During these 3 years, it has brought in $65k in revenue.
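A quick sanity check on those numbers (a sketch assuming continuous 100% utilization at the quoted $2.50/hr rate and $30k purchase price):

```python
# Rough H100 rental economics, using the figures quoted above:
# $30k purchase price, $2.50/hr rental, 100% utilization assumed.
price = 30_000          # USD per GPU at launch
rate = 2.50             # USD per GPU-hour
hours = 3 * 365 * 24    # three years of continuous rental

revenue = rate * hours
print(f"3-year revenue: ${revenue:,.0f}")        # $65,700
print(f"Payback: {price / rate / 24:.0f} days")  # 500 days
```

So at full utilization the purchase price is recovered in well under two years, which is why the numbers "work out" for operators.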


They worked out because there was an excess of energy and water to handle the load.

We will see how the maths works out given the 19 GW shortage of power: 7-year lead times for Siemens power turbines, 3-5 years for transformers.

Raw commodity prices are shooting up, there is not enough education to produce nuclear engineers and SMEs, and the RoI is already underwater.


My cynical take is that it'll work out just fine for the data centers, but the neighbouring communities won't care for the constant rolling blackouts.


Okay, but even in that case the hardware suffers significant underutilisation, which massively hits RoI. (I think I read they only achieve 30% utilisation in this scenario.)
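To see how hard a utilization cap hits the payback math, here is a hypothetical sketch (a $30k purchase price and the $2.50/hr rate quoted elsewhere in the thread; both figures are assumptions for illustration):

```python
# Effect of a utilization cap on GPU payback time (hypothetical figures).
price = 30_000   # USD per GPU
rate = 2.50      # USD per GPU-hour

for utilization in (1.0, 0.3):
    hours_to_payback = price / (rate * utilization)
    years = hours_to_payback / 24 / 365
    print(f"{utilization:.0%} utilization -> payback in {years:.1f} years")
# 100% -> ~1.4 years; 30% -> ~4.6 years, i.e. most of the hardware's useful life
```

At a 30% cap, the payback period stretches past the 4-6 year lifespan mentioned above, so the capped scenario really can leave the investment underwater.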


Why would that be the case if we assume the grid prioritizes the data centers?


That is not a correct assumption. https://ig.ft.com/ai-power/

Reports in Northern Virginia and Texas state that existing data centres are being capped at 30% to prevent residential brownouts.


That article appears to be stuck behind a paywall, so I can't speak to it.

That's good for now, but considering the federal push to prevent states from creating AI regulations, and the overall technological oligopoly we have going on, I wonder if, in the near future, their energy requirements might get prioritized. Again, cynical; possibly making up scenarios. I'm just concerned when more and more centers pop up in communities with fewer protections.


Beyond GPUs themselves, you also have other costs such as data centers, servers and networking, electricity, staff and interest payments.

I think building and operating data center infrastructure is a high risk, low margin business.


They can run these things at 100% utilization for 3 years straight? And not burn them out? That's impressive.


I don't see anything impressive here?


Not really. GPUs are stateless, so your bounded lifetime, regardless of how much you use them, is essentially the lifetime of the shittiest capacitor on the board. Modulo a design or manufacturing defect, I'd expect a usable lifetime of at least 10 years, well beyond the manufacturer's desire to support the drivers (i.e., the software should "fail" first).


The silicon itself does wear out (dopant migration or something; I'm not an expert). Three years is probably too low, but they do die. GPUs dying during training runs was a major engineering problem that had to be tackled to build LLMs.


> GPUs dying during training runs was a major engineering problem that had to be tackled to build LLMs.

The scale there is a little bit different. If you're training an LLM with 10,000 tightly-coupled GPUs where one failure could kill the entire job, then your mean time to failure drops by that factor of 10,000. What is a trivial risk in a single-GPU home setup would become a daily occurrence at that scale.
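That scaling is easy to quantify. A rough sketch, assuming independent failures and a hypothetical 5-year per-GPU MTBF (an illustrative number, not from the thread):

```python
# Cluster-level mean time between failures, assuming independent failures.
single_gpu_mtbf_hours = 5 * 365 * 24   # hypothetical: one failure per GPU per 5 years
gpus = 10_000                          # tightly-coupled training cluster

cluster_mtbf_hours = single_gpu_mtbf_hours / gpus
print(f"Cluster sees a failure roughly every {cluster_mtbf_hours:.1f} hours")
# ~4.4 hours: checkpointing and fast recovery become mandatory at this scale
```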


> the useful lifespan of the gpus in 4-6 years. Sooo what happens when you need to upgrade or repair?

The average life of a Starlink satellite is around 4-5 years.


Starlink, yes, at 480 km LEO. But the article says "put AI satellites into deep space". Also, if you think about it, LEO orbits have dark periods, so not great.

A better orbit might be Sun-synchronous (SSO), which is around 705 km, still not "deep space" but reachable for maintenance or a short-life deorbit if that's the plan. https://science.nasa.gov/earth/earth-observatory/catalog-of-...

And of course there are the Lagrange points, which have no reason to deorbit; just keep using the old ones and adding newer.
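For a rough sense of those LEO dark periods, here is a back-of-the-envelope sketch (assuming a circular 480 km orbit and worst-case geometry, i.e. the orbit plane contains the Sun):

```python
import math

# Eclipse time for a circular LEO orbit, worst-case beta angle.
MU = 398_600.0      # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6371.0    # mean Earth radius, km
altitude = 480.0    # Starlink-like altitude, km

a = R_EARTH + altitude
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60

# Fraction of the orbit inside Earth's cylindrical shadow.
half_angle = math.asin(R_EARTH / a)              # radians
shadow_min = period_min * (half_angle / math.pi)

print(f"Orbital period: {period_min:.0f} min")          # ~94 min
print(f"Max shadow time: {shadow_min:.0f} min/orbit")   # ~36 min
```

So a satellite at Starlink altitude can spend over a third of every ~94-minute orbit on battery power, which matters for a power-hungry compute payload.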


Damn. At this point it's not even about a pretense of progress, just a fetish for a very dirty space.


It's essentially a military network (which is why other power spheres want their own) and a way to feed money into SpaceX.



Stop linking this same Wikipedia page if you're not going to expound on it with further details or evidence. I'm holding you accountable to the HN guidelines here.


They re-enter and burn up entirely. Old starlinks don't stay in space.


They won't if you "put AI satellites into deep space", as the article says.


So they pollute the upper atmosphere instead!


Downvoting doesn't make that statement any less true…


With zero energy cost, it will run until it stops working or runs out of fuel, which I'm guessing is 5-7 years.


5 to 7 months, given they want 100 kW per ton, and the magical mystery-sauce shielding is going to do shit all.


Same as what happens with Starlink satellites that are obsolete or have exhausted their fuel: they burn up in the atmosphere.


> Sooo what happens when you need to upgrade or repair?

The satellite deorbits and you launch the next one.


So, instead of recycling as many components as possible (a lot of these GPUs have valuable materials inside), you simply burn them up.

I'm guessing the next argument in the chain will be that we can mine materials from asteroids and such?


Such a waste of resources


Not to mention that radiation hardening of chips has a big impact on cost and performance.


You could immersion cool them and get radiation resistance as a bonus.


Yes, because launching them immersed in something that will greatly increase the launch weight will help...


A "fully and rapidly reusable" Starship would bring the cost of launch down orders of magnitude, perhaps to a level where it makes sense to send up satellites to repair/refuel other satellites.


A lot of these accounts seem anecdotal. I have a clean copy of Win 11 IoT LTSC running on my laptop and it runs well. The desktop management, included Hyper-V, WSL2, and awesome RDP make it a great platform to get work done. Most problems people encounter with Windows have to do with driver maturity, and in the case of a megacorp-managed machine, it's all the "security" BS they put on there that slows you down to a crawl. Once you get stable drivers, I find Windows 11, with WSL as my shell, to be quite nice.


Well yes, it is anecdotal. After all, it's my personal experience, which is, by definition, an anecdote. At what point did I suggest the exact types of bullshit Win11 exposes me to are exactly the same as everyone else experiences?


Man that tech was cool and did you a solid.


Many techs went to work for the phone companies for a reason.


You can achieve the same thing with electronic voting. Just because it's electronic does not mean you do away with the "layers".


Are these assumptions wrong? If I 1) execute the AI as an isolated user, 2) behind a whitelist-only outbound and inbound firewall, and 3) on an overlay file mount,

am I pretty much good to go, in the sense that it can't do something I don't want it to do?


Agreed. So much easier with a self-hosted runner. Just get out of your own way and do it. Use cases like caching etc. are also much more efficient on a self-hosted runner.


You could restrict the SSH port by IP as well.

