They were arguably right. Pre-literate people could memorise vast texts (Homer's works, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.
We're becoming increasingly like the WALL-E people, too lazy and stupid to do anything without our machines doing it for us, as we offload ever more onto them.
And it's not even that machines are always better; they only have to be barely competent. People will risk their lives in a horribly janky self-driving car if it means they can swipe on social media instead of watching the road. Acceptance doesn't mean it's good.
We have about 30 years of widespread internet adoption, which I think is roughly comparable to AI in many ways (both give you access to information very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox
Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).
It seems more likely that there were only a handful of people who could. There still are a handful who can, and they are probably even better than in olden times [1] (for example, because there are simply more people now than back then).
Yes, there is some actual technique to learn, and then with moderate practice it's possible to accurately memorise surprisingly long passages, especially if they have any consistent structure. It's reasonable to guess that this is a normally distributed skill, like any talent or domain of expertise.
Used to be, Tony Soprano could afford a mansion in New Jersey, buy furs for his wife, and eat out at the strip club for lunch every day, all on a single income as a waste management specialist.
Brains are adaptive. We're not getting dumber, we're just adapting to a new environment. Just because our brains are less fit for other environments doesn't make them worse.
As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.
Weak is relative. All humans are weak compared to an elephant and strong compared to a mouse. If strength stops being a competitive advantage in humans then weakness isn't a signal that determines outcomes.
Brains are adaptive, and as we adapt we are becoming more cognitively unbalanced. We're absorbing potentially biased information at a faster rate. GPT can give you information on X in seconds. But have you thought about it? Is that information correct? Information can easily be shaped to sound real while masking the real as false.
Running a search may have spewed incorrectness too, but it made you exercise judgement, made you think. You could see two different opinions one beneath the other; you saw both sides of the coin.
We are no longer thinking critically. We are taking information at face value, marking it as correct and not questioning it afterwards.
The ability to evaluate critically and rationally is what's decaying. Who opens a physical encyclopedia nowadays? That itself requires resources, effort and time. Add in life's complexity, and none of that helps us evaluate and reject false information. The WALL-E view isn't wrong.
I see a lot of people grinding and hustling in a way that would have crushed people 75 years ago. I don't think our lack of desire to crack open an encyclopedia for a fact, rather than rely on AI to serve up a probably-right answer, is down to laziness; we just have bigger fish to fry.
Please provide evidence that masses of people ever thought critically across general fields they were not involved in.
Everyone seems to take at face value that there was a golden age of critical thinking by the masses at some time in the indeterminate past, but regardless of when you ask this question, the answer is always "in the past".
I surmise your thesis is incorrect and offer this one instead.
The average person can only apply critical thinking on a very limited amount of information, and typically on topics they deal with that have a quick feedback loop of consequences.
Deep critical thinkers across vast topics are rare, and have always been rare. There are likely far more of them now than ever, but that leads into the next point.
Information and complexity are exploding; the amount of data required to navigate the world we now live in is far larger than just a few generations ago. Couple this with the volume of information being presented to individuals and you run into actual physical constraints on how much information the human brain can distil into a useful model.
By (monetary) necessity, people have become deep specialists in limited topics, and analogies and paradigms don't necessarily transfer across different topics. For example, understanding code very well has very little bearing on whether I grok the reality of practised political sociology, and my idea of what counts as critical thinking there is very likely to have a large prediction mismatch with what actually happens.
> Who opens a physical encyclopedia nowadays?
I know plenty of people who binge Wikipedia and learn new things that way. While Wikipedia is not always perfect, it's not like older printed encyclopaedias like Britannica were perfect either.
You have a point about trusting AI, but I'm starting to see people around me realise that LLMs tend to be overconfident even when wrong, and verify the sources instead of just trusting them. That's the way I use something like Perplexity: as an improved search engine, and then I tend to visit the sources it lists.