Nope, they're surprisingly hard to get ahold of. So I've resorted to being extremely noisy online.
Temporal accumulation: imagine you're observing a signal through a narrow window, seeing only a partial, noisy snapshot each time. "Temporal accumulation" is what happens when you let the observer remember previous snapshots and use them to improve its prediction of the next one. The persistence advantage P measures how much that memory helps: the difference in prediction error between an observer that accumulates across episodes and one that only uses the most recent snapshot.
For a black hole shadow (EHT), P ≈ 0: each snapshot already contains the full picture, so memory adds nothing. For gravitational wave strain (LIGO), P is large and positive: the chirp evolves across snapshots, so memory is essential. The question the papers ask is: what determines how much memory helps? The answer turns out to be the spectral entropy of the waveform, not mass.
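A toy sketch of the idea (my own illustration, not the papers' actual estimator): take the memoryless observer to predict "next snapshot = current snapshot," and the accumulating observer to linearly extrapolate from the last two snapshots. On a static source both predictors are equally good (P ≈ 0); on a slowly chirping source the extrapolating observer tracks the drift (P > 0). The sinusoid snapshots and the 0.1 Hz/episode drift rate are arbitrary choices for the demo.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 256)

def snapshots(freqs):
    """One noise-free snapshot per episode: a sinusoid at that episode's frequency."""
    return [np.sin(2 * np.pi * f * t) for f in freqs]

def persistence_advantage(x):
    """P = (error without memory) - (error with memory).

    Memoryless observer: predicts the next snapshot to equal the current one.
    Accumulating observer: linearly extrapolates from the last two snapshots,
    a crude stand-in for "remembering how the signal evolves across episodes."
    """
    err_memoryless, err_accum = [], []
    for k in range(1, len(x) - 1):
        target = x[k + 1]
        err_memoryless.append(np.mean((x[k] - target) ** 2))
        err_accum.append(np.mean((2 * x[k] - x[k - 1] - target) ** 2))
    return np.mean(err_memoryless) - np.mean(err_accum)

# Static source (EHT-like): every snapshot is the same picture.
P_static = persistence_advantage(snapshots([5.0] * 50))

# Evolving source (LIGO-like chirp): the frequency drifts upward each episode.
P_chirp = persistence_advantage(snapshots([5.0 + 0.1 * k for k in range(50)]))

print(f"P (static) = {P_static:.4f}")  # ~0: memory adds nothing
print(f"P (chirp)  = {P_chirp:.4f}")   # > 0: memory helps track the drift
```

With noisy snapshots the picture gets richer (naive extrapolation amplifies noise, so the accumulating observer needs to average as well as extrapolate), but the noiseless case is enough to show why P separates static from evolving sources.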
I've been doing this for about a month. I also have wildly complicated ML pipelines working similarly in parallel. When Karpathy's 'autoresearch' came out, I was surprised by how novel it was treated as being.
Horizontal space is still at a premium regardless of monitor size when designing/building for responsive viewports. Vertical space is almost zero cost in terms of design constraints.
Even on large monitors, you'd be surprised by the number of people at 150% zoom with small windows open instead of fullscreen.
I was a bigger fan of the certain doom of 2025, and I think the AI 2030 movement will have better design sense and storytelling. But I haven’t seen anything yet that has the oomph and fire of Tipper Gore’s crusade against youth music.
We need more showmanship, more dramatic catastrophizing. I feel like our current crop of doomers isn’t quite shameless enough to be really entertaining.
An important thing to keep in mind for non-extinction doomerism is that individual experiences vary greatly. There may be a significant number of people or groups who really do experience what was predicted.
Similar to how experiences of the average rise in temperature (I would prefer if they had used the term "energy") differ greatly depending on the region.
Also similar to "the country is doing well, look at the stock market and the GDP".
I think everybody who wants to have an actually serious discussion needs to invest a lot more effort into getting at all those annoying "details", and be more specific.
That said, I think that "AI 2027" link reads like a movie script, not a prediction, so I'm not sure it even makes sense to criticize it as if it were something serious - even if the authors say at the start that they mean what they write and actually take it seriously.
100% agreed! We think about the industrial revolution and the rise of word processors and the Internet as social goods, but they were incredibly disruptive and painful to many, many people.
I think it’s possible to have empathy for people who are negatively affected without turning it into a “society is doomed!” screed.
People should understand that the reason this seemingly fan-fict blog post gets so much traction is the lead author's August 2021 "fan-fict" blog post, "What 2026 Looks Like":
I can't help but notice that it doesn't matter what DeepCent does because OpenBrain will reach self awareness 6 months before them no matter what. Who needs a profitability plan when you're speedrunning the singularity.
Was hoping you'd appreciate our efforts to retain your original quirky vision. We named the rice ball Geoff as a homage to you (intentionally spelled the silly way). (https://warpstreamlabs.github.io/bento/docs/about)
FWIW in the UK, Geoff is the usual spelling (from Geoffrey) of the name. Jeff (from Jeffrey) also exists in the UK, but is much rarer, even if it's the most common form in the US.