Well, people want to finish their work and go home. That's why.
I know HNers don't like "surveillance everywhere", but...
if you're in law enforcement, every chance to get info means hours or days saved on your work... so you reach for the easy way: if you can get the comms of a drug gang, you can identify who belongs to that gang (instead of an officer risking their life by actually 'joining' the gang)
But... some do cross the line (e.g. watching the comms of their ex, getting paid by political actors to listen in on opponents, etc.)
It's not like law enforcement is 100% bad guys, but things are "complicated".
It's mainly a problem of consequences and accountability. The people who suffer unjustly from unlawful surveillance and overreach are usually unable to do anything about it, and they are assumed to be criminals anyways so nobody cares. Punishments for violating the law are nearly nonexistent for "law enforcement", so a culture of impunity is formed that cannot be easily fixed. Anybody trying to enforce the rules would run into both corrupt and noncorrupt noncompliance, just like trying to get fast food workers to follow health and safety guidelines. It's probably impossible to reform and only a wholesale teardown and replacement without keeping anyone contaminated by the existing culture has a chance.
"OpenClaw" is a name from January 27, 2026. It's new enough that it's not in the training data for a lot of AI models. So they, quite literally, don't know what it refers to.
"If you don't know an identifier, google it" isn't a very reliable behavior in today's models. They do it, but only sometimes.
That's true, it could have been going off training data and skipping an explicit web search, but it was odd: I specifically asked it to pull references for my blog post, and it pulled ~20 links in the same message where it said OpenClaw doesn't exist.
The current findings seem consistent with "both plaques and tangles are significant components of the pathology" and "our interventions are typically late and the accumulated neurological damage is already extreme by the time clinical symptoms show".
Attacking the plaques wasn't completely worthless - findings show that this often slows disease progression, especially in early cases. There are pre-symptomatic trials ongoing that may settle whether "intervention is late" is the main culprit in treatment underperformance.
Not really. Anthropic has a "CBRN filter" on the Opus series. It used to kill inquiries on anything remotely related to biotech. It seems to have gotten less aggressive lately?
I was reverse engineering a medical device back in 2025, and the filter kept killing half my sessions.
Why not both? A pre-trained LLM has an awful lot of structure, and during SFT, we're still doing deep learning to teach it further. Innate structure doesn't preclude deep learning at all.
There's an entire line of work that goes "brain is trying to approximate backprop with local rules, poorly", with some interesting findings to back it.
Now, it seems unlikely that the brain has a single neat "loss function" that could account for all of its learning behaviors. But that doesn't preclude deep learning either. If the brain's "loss" is an interplay of many local and global objectives of varying complexity, it can still be a deep learning system at its core. Still doing a form of gradient descent, with non-backpropagation credit assignment and all. Just not the kind of deep learning system any sane engineer would design.
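To make "approximate backprop with local rules" concrete, here's a toy sketch of one such rule, feedback alignment: the backward pass propagates the error through a fixed random matrix B instead of the transposed forward weights, yet the network still learns. Everything here (the network sizes, the random linear-map task, the learning rate) is an illustrative assumption, not a model of any real brain circuit.

```python
import numpy as np

# Toy feedback-alignment demo: credit assignment without using W2.T,
# i.e. without exact backprop. Hypothetical task: a small 2-layer relu
# net learning a random linear map T.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights
T = rng.normal(0, 1.0, (n_out, n_in))    # target linear map

def relu(x):
    return np.maximum(x, 0.0)

lr = 0.01
losses = []
for step in range(2000):
    x = rng.normal(0, 1.0, (n_in, 32))   # fresh batch each step
    y = T @ x
    h = relu(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                        # output error
    losses.append(float(np.mean(e ** 2)))
    # The "local rule" substitution: hidden error comes from B @ e,
    # not W2.T @ e, so the gradient is only an approximation.
    dh = (B @ e) * (h > 0)
    W2 -= lr * (e @ h.T) / x.shape[1]
    W1 -= lr * (dh @ x.T) / x.shape[1]

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Despite the "wrong" backward weights, the forward weights drift into alignment with B over training and the loss drops, which is the flavor of result that line of work reports.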
Modern systems like Nano Banana 2 and ChatGPT Images 2.0 are very close to "just use Photoshop directly" in concept, if not in execution.
They seem to use an agentic LLM with image inputs and outputs to produce, verify, refine and compose visual artifacts. Those operations appear to be learned functions, however, not an external tool like Photoshop.
This allows for "variable depth" in practice: composition draws on previous images, which may themselves have been generated from scratch or built out of earlier images.
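Purely as a sketch of the control flow being described (not the actual architecture of either product, which is unpublished), the generate/verify/refine/compose loop might look like this, with every function a stand-in stub for a learned model call:

```python
# Speculative sketch of an agentic image loop: generate, verify,
# refine, and compose are hypothetical stand-ins for learned model
# calls, stubbed out with dummy logic so the control flow runs.
from dataclasses import dataclass, field

@dataclass
class Image:
    desc: str
    quality: float = 0.0
    parents: list = field(default_factory=list)

def generate(prompt):            # stand-in for learned text->image
    return Image(desc=prompt, quality=0.4)

def verify(img):                 # stand-in for a learned critic
    return img.quality

def refine(img):                 # stand-in for learned image->image editing
    return Image(img.desc, min(1.0, img.quality + 0.3), [img])

def compose(imgs):               # stand-in for multi-image composition
    return Image(" + ".join(i.desc for i in imgs),
                 min(i.quality for i in imgs), list(imgs))

def agentic_render(prompt, threshold=0.9):
    img = generate(prompt)
    while verify(img) < threshold:   # "variable depth": refine until good
        img = refine(img)
    return img

scene = compose([agentic_render("a cat"), agentic_render("a hat")])
print(scene.desc)
```

The `parents` links are what give composition its variable depth: each composed image hangs off a tree of earlier generations and refinements of arbitrary height.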
Evolution is an optimization process. So if platonic representation hypothesis holds well enough, there might be some convergence between ML neural networks and evolved circuits and biases in biological neural networks.
I'm partial to the "evolved low k-complexity priors are nature's own pre-training" hypothesis of where the sample efficiency in biological brains comes from.