I can only comment on the one field I know intimately: computer vision. It is true that when you need a text description of the contents of an image, we have discarded feature-based approaches. But attempts to turn vision-based tracking, mapping, and navigation into a learned process have not performed well in the applications I have worked on. End-to-end control from raw images to output can do very well, but in most systems the feature-based approaches are still employed alongside CNNs for tracking. ML-only tracking is subject to a lot of noise because it lacks good temporal history, has poor data association, and is sensitive to outliers.
So, it's not discarded; it's been supplanted by CNNs as the primary signal, but our old tricks (re-association, factor graphs, batch processing, even plain old homographies and the MH-EKF) are still very much the scaffolding.
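To make the "CNN as signal, classical scaffolding" pattern concrete, here is a minimal sketch of one common shape it takes: noisy per-frame CNN detections fed through a constant-velocity Kalman filter that carries the temporal history the detector lacks. Everything here (the noise levels, the constant-velocity model, the simulated detections) is illustrative, not from any particular system.

```python
# Hypothetical sketch: a CNN detector emits noisy 2-D positions each frame;
# a constant-velocity Kalman filter (classical scaffolding) smooths them
# and carries state across frames. All parameters are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only observe position, not velocity
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (tuning assumption)
R = 1.0 * np.eye(2)            # measurement noise of the CNN detections

def kalman_step(x, P, z):
    """One predict/update cycle given a CNN detection z = (x, y)."""
    # Predict forward one frame
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detection
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulated noisy "CNN detections" of a target moving right at 1 px/frame
rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
for t in range(1, 30):
    z = np.array([float(t), 0.0]) + rng.normal(0.0, 1.0, size=2)
    x, P = kalman_step(x, P, z)

print(np.round(x, 2))  # filtered state should land near [29, 0, 1, 0]
```

A real pipeline adds data association (matching detections to tracks) and outlier gating on top of this, which is exactly where the factor-graph and re-association machinery earns its keep.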
I expect it is the same in the other subfields mentioned: the main driver of improvement is no longer human-directed, knowledge-based algorithms, but human-designed, learning-based, heterogeneous pipelines. Even RAG or Tesla's Autopilot (probably) fits this bill nicely.
There will always be a set of problems beyond the current (in whatever year) computational limits of brute force, and we don't know how many of a human's capabilities are in that set.
The delta between a clever algorithm and brute force, in terms of computational advancement, could be 7 years or it could be 7,000.
Not really, no. That choice is motivated by avoiding impractically small gradients on the plateaus, which spoil the optimization properties when used in deep ANNs.
The sigmoids it replaced had a bit more neuroscience inspiration, but they were so oversimplified that it barely counts.
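The plateau problem is easy to see numerically. A quick sketch (the specific inputs are just illustrative): the sigmoid's derivative collapses for large |z|, while ReLU's derivative stays at 1 on the positive side, so chaining many saturated sigmoid layers multiplies tiny factors together.

```python
# Compare gradient magnitudes of sigmoid vs. ReLU at increasingly
# saturated pre-activations. Inputs chosen purely for illustration.
import math

def sigmoid_grad(z):
    """Derivative of the logistic sigmoid: s(z) * (1 - s(z))."""
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)

def relu_grad(z):
    """Derivative of ReLU: 1 for positive inputs, 0 otherwise."""
    return 1.0 if z > 0 else 0.0

for z in (0.0, 5.0, 10.0):
    print(f"z={z:>4}: sigmoid'={sigmoid_grad(z):.2e}  relu'={relu_grad(z):.0f}")

# Chained through 10 saturated layers, sigmoid gradients shrink
# multiplicatively toward zero; ReLU's stay at 1 on the active path.
print(sigmoid_grad(5.0) ** 10)
```

The peak sigmoid gradient is 0.25 (at z = 0), so even the best case shrinks signal by 4x per layer; ReLU sidesteps that entirely for active units.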
1. https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...