Hacker News | aaroninsf's comments

If realized, ...

[side-eye-sock-puppet-monkey.gif]


I read the pre-publication version of this paper, and there was then, and still is, a serious problem with their logic, consistent with, if not bad faith, then something akin to it:

Assume for a moment their core hypothesis is correct: that transient objects in LEO were captured on film pre-Sputnik.

What might we say about their nature?

The authors' undisguised implication is "it's aliens" to be blunt; that's their motivation for this work.

Consequently they put effort (which may not be noted in the final published papers...) into the question of whether they could make any meaningful inference about the geometry and spectral properties of their "transients"; their interest (of course) being that if they could make a meaningful argument for regular geometry, they would have, in effect, the story of the century.

These efforts failed totally.

A natural inference is that, among the possible reasons for this failure, the objects (remember, we are assuming they exist) simply do not have such characteristics. The most likely reason that would be true is that they are naturally occurring objects.

I looked this up and was surprised to learn that there are currently estimated to be on the order of a million small objects in the inner solar system.

So: the entire hypothesis hinges on "significant correlation with nuclear testing." Because otherwise, one can reasonably assume that transient traces of objects—when they are actually traces of objects—would presumably, in a quotidian way, be caused by some of these million objects.

Or so say I.

There is no end of peculiar and provocative history and data in UFOlogy, and even more murk; one needs to tread very carefully to avoid going down (or being led down) into false conclusions, disinformation, and the like.

The authors of this paper seem singularly uninterested in that caution.


Assuming what you say is true, couldn't that be validated by making additional observations in the present day? After all, we'd assume some sort of statistical distribution for such objects. Is there any reason that would be unrealistic?

That was the era of above-ground testing. Is it possible that some of these tests kicked pieces of metal into LEO? Though I suppose those orbits would show up as streaks, not point sources, in photographs with an hour-long exposure.

If you want to downvote, I invite an alternate explanation for their behavior and the contextualizing media posture, which regularly situates what they are willing to say in print within unsupported and click-bait-worthy speculation.

Another example of bad faith: curve-fitting around what constitutes "nuclear testing."


The wildcard for our civilization that I pay a lot of attention to:

will AI help us get through blockers like this?

I'm out of the prediction business but my guess is: absolutely, but iff we don't collapse in some way first.

Wild to be alive as the centuries-long horse race of industrialization, between doom and the stars, approaches its finish line.


How would AI help achieve commercial fusion? You first need to identify the blockers. These almost entirely boil down to "how do we precision-machine large pieces of hard metal?", "how do we assemble facilities with untold process channels?", "how do we capture neutrons without making a prohibitively massive machine?", and "how do we make metal that doesn't melt?".

Now, AI might have a chance at supercharging materials research and producing miracle materials that help address the blanket and first-wall challenges, but honestly those are roadblocks we're not even running into yet. AI cannot and will not fix issues related to organizing labor and supply chains, and it will not suddenly give megaprojects a 100% record of on-time, on-budget delivery. It's just not going to happen.

So are these problems intractable? Of course not. It's just not what the chatbot is well suited for. Anyone saying otherwise is selling something.


Fwiw, machine learning is already being used for plasma simulations.

The race between doom and the stars is something I think about quite a bit.


This is the science fiction fan's version of hopes and prayers.

This is a fascinating variation on the forest/trees problem, and a false dichotomy.

The AI "doomerism" taken up in this piece is one we see replicated a lot; it offers up a scarecrow: that the new risks to our civilization worth talking about require AGI, agents, even ASI.

Cory should know better. He nearly gets there, recognizing that the corporation represents an entity with agency that is misaligned.

But he somehow elides the fact that AI is plenty capable of doing meaningful and novel harm, and may be capable of existential harm, already, as it is—both absent AGI/ASI, and in ways which are genuinely novel and against which we consequently have no good defenses: as individuals, as societies, as a civilization.

Incremental AI is at heart "just" the latest force-and-effort multiplier.

But it is an exponential multiplier; and it is applicable in domains which have not been subject to such leverage before.

Examples are not at all scarce and some are already well known, e.g. the specific risks from the intersection of AI and "biohacking" and other kinds of computational biology.

I'm a fan, but Cory, pal, you're slipping into something that looks a bit like intellectual laziness and polemics here, rather than evidencing thinking through the shape of the problem.

We can be at risk both from the novel applications and leverage of AI; and from their oligarchic kakistocratic owners. It's yes-and.

(And, by the way—we can also again be genuinely at risk from agents, something that quacks like AGI, and may quack like ASI: we don't know what that is yet. All of these must be tracked. It's not an OR.)


I've been using 4.6 in a long-term development project every day for weeks.

4.7 is a clusterf--k and train wreck.


Hot take:

I assume the author wrote this with the expectation that much of the readership would gasp, and react with "the natural horror all right-thinking folk would have in response to violence of any kind."

Sorry, lol, no.

The appropriate question for "all right-thinking" folk is very different: if argumentation has no impact, and it's obvious that it shall have none—what other avenue do you expect opponents, who take the risks seriously, to take...?

That's not a rhetorical question.

To put it bluntly: the machinery of contemporary capitalism, especially as practiced by our industry, very clearly leaves no avenue.

How many days ago was Ronan Farrow here doing an AMA on his critique of Altman—whose connection to this specific community is I assume common knowledge...?

How many of you have carried, or worked beneath, the banner, move fast and break things...?

What message does that ethos convey about the extent to which "tech" is going to respect community standards, regulation—the law?

And on the other edge: what does this ethos enshrine about how best to accomplish one's aims?

One of the bigger domestic stories this past week, which has inflamed a certain side of Reddit, is the "disgruntled employee torches warehouse" one.

Consider also—and I'm deadly serious—the broader frame narrative we are all laboring within today: that the new contract of the capitalist class—including and perhaps especially those in "tech," e.g. in the Peter Thiel circles—seems very much to be, "social stability via surveillance and a police state, rather than through equity and discourse."

When code is law, the law is buggy.

When there is no recourse through the law, you get violence.


Telling to be downvoted.

Hard truths are hard.

Cliché though it may be, with great power comes great responsibility.

Tech culture, as epitomized by the general readership of this site, has to a great degree, if of course not uniformly, abdicated that responsibility;

and the general leadership of the industry, as epitomized by the culture Y Combinator has helped build, is much more uniform in this abdication.

There is neither political nor moral mystery here; there is just denialism, ignorance, and avoidance.

If you feel attacked by these accusations, as always it's a fine time to ask why.

What accommodations have you made...? What expedience have you allowed? What do you look away from or shy from reasoning through about the impact of the technologies you work on, the behavior of your employer?

That everyone is complicit is not a defense; it's an indictment.

The violence of present concern is a wholly natural and entirely predictable response to the diffused violence our industry, and capitalism generally, has performed against humanity, not to mention the biosphere.

Violence will continue, and if you think it is indefensible—then provide some alternative mechanism for steering resources and the allocation of power.

We, collectively, constructed the tensions which are now resolving in exactly the manner one would expect, if only one were remotely familiar with history or indeed human nature.

Those in tech who declined to study or understand the humanities may be surprised to learn that the forces at work have well-understood and well-inspected patterns. Ignorance is no excuse.


Yeah.

This was terrible branding, and is terrible branding.

The clash between "Earendil" and "Pi" is so overdetermined it might have required earnest effort.


I love the smell of cope in the morning.


<Me, wearing my Casio watch that I found on our hill.>


I read this.

It's got some provocative ideas, which Stephen foregrounds.

It's got a great hook, and like most writing incubated under circumstances like this, it leans hard into a polished, sharp introduction to a well-considered world with a very specific flavor.

It's also—no better way to put it—crappy as a novel.

It's not because the author can't string sentences together.

It's because that's not what makes a novel function as a novel.

Epic opening and premise establishment: 10/10

Nice "plot twist", predictable in its inevitability if not its specifics; conforms to genre: 7/10

Narrative arc: 2/10

Ability to sustain meaningful tension and interest while working through the de rigueur mechanics of filling hundreds of pages: 1/10

I get that there is a new readership with different expectations and styles of reading. (Looking at you, TikTok; looking at you, Dungeon Crawler Carl; looking at most successful YA fiction, especially that which gets SPICY and is released in 8-book series with a new volume every 11 months.)

If you're a silverback and relish long-form fiction as previously conceived: set expectations accordingly.


I am a "silverback" and have read all of the classics of the SciFi genre and I loved this novel. An unconventional topic like this isn't going to fit all of the norms of writing. I thought it was well written and I love his dialog. I'm looking forward to future work.


Yeah, it's trying to cohere the structure of the book with the subject matter, which I really appreciate. It doesn't always quite land, but I think it was really worthwhile. Although I can understand how someone who is looking for a "normal" novel might be dissatisfied. But to me it's a bit like House of Leaves: you need to accept the meta-conceit of the book being subject to the effect of its contents.


As someone who has a low opinion of House of Leaves,

and was e.g. entirely immune to the charms of Twin Peaks,

I believe you're right.

But even then... once this devolved into what felt like a teeth-clenching march to the Final Battle, on the basis, as far as I can tell, that this is what the author understood Novels Must Do,

it wasn't even providing the pleasures you get from just floating along.

It was just a grind.

I can't take Adrian Tchaikovsky either...

