Hacker News | technicalbard's comments

I used elm before pine, on a terminal hooked to a Bull-Honeywell machine running Multics...


Except it won't work. The intermittency problem of renewables doesn't make them cheaper - there is no low-cost solution to that. The intermittency makes energy more expensive. Further, their population is already shrinking. Labor costs will start to rise quite rapidly in China, making things more expensive.

Renewable penetration hasn't made energy cheaper anywhere it is a significant portion of grid supply.


The very first nuclear reactor went from idea to operating in 2 years. In the middle of WW2.

The only reason nuclear projects take longer than other industrial projects is regulatory capture.


The wartime Hanford reactors did not produce any electricity. They had no safety containment systems. And they released cooling water containing radioactive contamination directly into the Columbia River:

https://www.osti.gov/opennet/manhattan-project-history/Proce...

Aside from "possible catastrophes" involving control rod failures, enemy bombing, or sabotage, DuPont's greatest concerns about reactor operations involved pumping Columbia River water through the reactors for cooling purposes. Using a "once-through" system, water was taken from the river and chemically treated before passing through the core of the reactor at the rate of 75,000 gallons per minute and being released back into the river. The reactor used canned fuel slugs, which prevented radioactive fuel from getting into the coolant as long as the cans did not accidentally rupture. Even if no fuel escaped, however, there was induced radioactivity of the impurities and treatment chemicals within the water. Radioactivity thus inevitably entered the Columbia. To alleviate possible dangers, DuPont engineers diverted the heated and "somewhat radioactive" exit water, containing mostly, but not exclusively, short-lived radionuclides, as well as certain chemical contaminants, to a 12,000,000 gallon retention basin. Each basin was divided in two, with the two sides operating in parallel. Monitoring for radioactivity occurred at the inlet to the basin, at an intermediate point, and at the exit. After six hours, radioactivity diminished by a factor of about twenty. With radioactivity below the then tolerance dose for complete immersion, the water was further diluted with wastewater free from radioactivity and returned via underground pipes for release in mid-river.


During WW2 the US could build a ship in 39 days. Now it takes 10 years.


I know this point is made a lot but I’m not sure it’s that meaningful.

WW2 boats were not the enormously complex network and sensor systems that today's are. Yeah, 10 years is extreme, but I don't think 39 days is physically possible today.


The 39 day boats built in WW2 were probably more similar in complexity to a modern boat than a WW2 reactor is to a modern reactor.


It was Liberty ships—slow, high-capacity, not especially durable cargo ships built as fast as possible, assembly-line style. The median construction time only got that low after the assembly line really got going and they were able to work out some kinks. It was over 200 days early on.

Warships—none of the capital ships, certainly—weren't built that fast. Not even the cheap-as-hell escort carriers we built tons and tons of, I don't think. Destroyers and little corvettes and other small screen and utility ships, maybe some of those weren't too far from 39-day construction times.


I wouldn't be surprised if some of the systems were actually harder to work on back then. Old battleships always seemed to have insane amounts of wiring and cabling, because they used the nightmarish one-signal-per-wire protocol in the pre-digital age.


> because they used the nightmarish one-signal-per-wire protocol in the pre-digital age.

Can you say more about this? I find that kind of thing fascinating.


https://softsolder.com/2017/11/05/battleship-wiring/

They presumably used to have to run an individual wire for every. single. signal, or at most, they'd have extremely simple multiplexing.

I have no idea what these cables do, but I'm glad I'm not the one repairing it. Not only is there physically a lot of wire, 90% of it looks to be going directly to something mechanical.

I don't know how modern ships do it, but with civilian digital logic, you'd probably run a few redundant Ethernet (or fiber, or CAN) links to some kind of fan-out box right near the switches or motors.

Cars these days are very extreme about it, sometimes even running the entertainment system commands over the fairly critical CAN bus for some reason.

If one of those cables went bad, spanning tree protocol would save you till you fix it, and probably tell you which cable broke and maybe even where. Mix them up, no problem, firmware knows what to do.

Plus, everything goes through a computer, so you've got logging and all kinds of stuff to help spot issues, rather than the classic "Oh well, noise and intermittent connections, we live with it till it gets bad enough to be all the time".

The only rats nest is right from the mechanical switch to your PLCish thing, but that's short enough to trace by eye.

You might have zero concern about interference in cables, because they might be fiber.

You won't have complicated color-code schemes or wiring diagrams, or cut-off wires that nobody knows where they go anymore; you can just test it, because you don't have 50 thousand things to sift through.

People argue about whether digital or analog is more reliable, but digital is definitely easier from a hardware perspective; the hard stuff is done in factories by robots.

I'm sure the technicians had special training and it was a lot better than average civilian industrial uses of the pile-o'-wires system, but I would think it would still be time-consuming and not parallelizable; only so many people can fit around one of those giant bundles at once.

I could also be vastly misinterpreting how modern ships are wired, I know some industrial setups today insist on home run wiring instead of condensing to digital.
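As a toy illustration of the difference, here's a sketch of one byte on a shared bus replacing eight dedicated wires. The 8-switch panel and bit layout are made up for the example, not any real marine or CAN protocol:

```python
# Toy sketch: eight switch states packed into one byte for a shared bus,
# instead of one dedicated wire per switch (old battleship style).

def pack_switches(states):
    """Pack a list of 8 booleans into a single byte (bit 0 = switch 0)."""
    frame = 0
    for i, on in enumerate(states):
        if on:
            frame |= 1 << i
    return frame

def unpack_switches(frame):
    """Recover the 8 switch states at the fan-out box near the motors."""
    return [bool(frame >> i & 1) for i in range(8)]

switches = [True, False, False, True, False, False, False, True]
frame = pack_switches(switches)  # one byte on the wire, not 8 wires
assert unpack_switches(frame) == switches
print(f"frame = 0b{frame:08b}")  # -> frame = 0b10001001
```

Real buses add framing, addressing, and checksums on top, but the core saving is the same: many signals time-share one physical link.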


That picture makes “cable management” take on a whole new meaning, my goodness.


And you think the safety record of those ships was the same as today?


A hell of a lot safer than a WW2 reactor!


Well, and that the nuclear reactors back then commonly had terrible accidents, and after Chernobyl (and TMI in a lesser manner) we realized that you can really screw up the planet with a nuclear oopsie whoopsie.


The planet? No, you can’t.


Or is it irrelevant that half of Europe was covered to the level that, even to this day, in some places you have to measure the radiation levels of mushrooms and wild game?

https://www.researchgate.net/figure/Distribution-of-radiatio...


Wrong. Storage has opex, because it has energy losses. Not all the energy that goes in can come out. 2nd Law and all that. On top of that, there will be maintenance and operating staff costs.
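Round-trip losses alone act like a per-MWh markup, before any maintenance or staffing costs. A toy calculation with made-up numbers - the 85% round-trip efficiency and $50/MWh input price are assumptions for illustration, not data for any real system:

```python
# Illustrative only: how round-trip losses raise the effective cost of
# energy delivered from storage (ignoring capex, maintenance, and staff).

def effective_cost_per_mwh(input_cost, round_trip_efficiency):
    """Cost per MWh delivered: you must buy 1/efficiency MWh per MWh out."""
    return input_cost / round_trip_efficiency

cost = effective_cost_per_mwh(input_cost=50.0, round_trip_efficiency=0.85)
print(f"${cost:.2f}/MWh delivered")  # -> $58.82/MWh delivered
```

The point is just that the loss term compounds with every other cost: whatever you paid to charge the storage, you pay again proportionally for the energy that never comes back out.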


Wind would need to guarantee dispatchability on demand to fix a price. Offering low prices when it can is not the same as reliable on-demand sources.


This is true of all knowledge work. Traditional engineering is fraught with the same problem - you don't know EXACTLY how you will solve the challenges, so how can you accurately estimate them? Even worse - if you don't have a clear set of requirements, or they are expected to change as you progress (a defining feature of Agile), then estimating with any likelihood of hitting the estimate becomes all but impossible.

The more uncertainty in the path, the less accuracy in the estimate. Kahneman's latest book "Noise" provides some good background on why this happens.

Having multiple people do independent estimates and averaging them probably gives better results, and having a clear process to document assumptions and test sensitivity to those estimates can also help.
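A minimal sketch of that averaging idea, with made-up estimate numbers: when the estimators' errors are independent, the spread of the mean shrinks roughly as 1/sqrt(n), which is why pooling several estimates tends to beat any single one:

```python
# Why averaging independent estimates helps: the standard error of the
# mean falls as roughly 1/sqrt(n). Estimate values are invented.
import statistics

estimates_days = [12, 20, 15, 30, 18]  # five independent estimates
mean = statistics.mean(estimates_days)
stdev = statistics.stdev(estimates_days)
stderr = stdev / len(estimates_days) ** 0.5

print(f"mean = {mean:.1f} days, individual spread = {stdev:.1f} days, "
      f"spread of the mean ~ {stderr:.1f} days")
```

This only works if the estimates really are independent; if everyone anchors on the first number said aloud, the noise doesn't cancel.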


So they are building LISP inside Excel.... It is now a corollary of Greenspun's Tenth Rule of Programming...


If the Vikings had brought the beads to Greenland or Labrador and traded with the Inuit, the Inuit could have traded those beads across the Arctic of Canada in a couple of hundred years - EASILY. Remember that the Inuit of North America only entered from Asia in the last 4000 years or so, and speak a language that is closely related to languages in Siberia...

So these beads could have come to Alaska from east OR west. We will probably never know for sure.


Alaska was peopled in the Holocene (>10k years ago). But yeah, I think you're right: I tend to think of the coastal Arctic peoples in particular, across AK/Canada, as having more consistent inter-tribal contact and periodic aggregation than between Arctic and more southerly tribes. Not crazy to think trade goods could flow between the indigenous people of the east and west coasts of the Arctic, albeit slowly. I think it far, far more likely that this item would have come to Alaska via Siberia though, if for no other reason than proximity. Fascinating to think of the story of that bead. Blows the mind.


I'm not so sure about the "easily" part: from [1], most of Greenland was colonized after the Vikings settled Iceland. So I'd imagine it wasn't exactly a leisurely trip from there to Alaska.

[1] https://en.wikipedia.org/wiki/Thule_people


It would have been a series of trade network hops, not an express route by one guy. So traded from one village to the next over and over westwards. Going from Venice over the Silk Road and up through Siberia and into Alaska, or up through Eastern Europe into northern Asia and across Siberia, probably would have involved fewer exchanges though, I’d guess. Either way, that bead is a looong way from Venice. :D


Umm. Totally misses that the LISP family of languages predates both C and ML and introduced many of the features he cares about...


I'm an amateur developer and I write checking code in functions all the time because you can't be certain of the source of the data!!!

