> AI is going to be for the next 50 (or 100, or probably 200) years.
To be clear about that "probably 200", are you saying that you believe we'll need trillions of times the processing power of the human brain in order to crack how it works, or that you believe that we've nearly reached the end of increases in processing power, for at least the next 200 years?
If I had to guess, he's saying that at the current rate of software and research progress, we won't be able to cobble together the weak, specialized subsystems we currently call "AI" into anything more interesting for quite some time. That's not an altogether uncommon belief, especially among those who specialize in robotics or other practical AI applications, because they know first-hand how hard it is to produce humanlike behavior with any of the techniques we know of today.
But self-improving AI is not remotely predictable from our current progress, and really, it's not even the same field as what we call AI today. Extrapolating current progress out 50 years is like asking a bombmaker in 1935 to look at a log-log plot of historical explosive yields and predict the maximum yield of a bomb in 1950. It doesn't matter how slowly mainstream research moves if someone finds a chain reaction to exploit, and it's impossible to predict when someone will.
IMO there's very good reason to believe we're already deep into the "yellow zone" of danger here: we have more than enough computational power to set off a self-improving chain reaction, we just don't know how to write the software. What we really have to worry about is that, as time goes by, we creep toward the "red zone", where you don't even need to know how to write the software, because any idiot with an expensive enough computer can brute-force the search through program space (more realistically, by relying on evolutionary or other relatively unguided methods). That's exceptionally dangerous, because the vast majority of self-improving AIs would be hostile, and we want to make sure that the first one to emerge is benevolent.
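To make "relatively unguided methods" concrete, here's a minimal toy sketch (not anyone's actual method, and obviously nowhere near dangerous): a genetic-programming loop that evolves random arithmetic expressions toward a target function purely by mutation and selection. All names and parameters here are made up for illustration; the only point is that this kind of search finds programs without the searcher ever understanding how they work.

```python
import random

OPS = ("+", "-", "*")

def random_expr(depth=2):
    # Leaf: the variable x or a small constant; otherwise a random operator node.
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", str(random.randint(1, 5))])
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if isinstance(expr, str):
        return x if expr == "x" else int(expr)
    op, a, b = expr
    a, b = evaluate(a, x), evaluate(b, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def fitness(expr):
    # Lower is better: squared error against a made-up target, x**2 + 1.
    return sum((evaluate(expr, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

def mutate(expr):
    # Replace a random subtree with a freshly generated one.
    if isinstance(expr, str) or random.random() < 0.3:
        return random_expr()
    op, a, b = expr
    return (op, mutate(a), b) if random.random() < 0.5 else (op, a, mutate(b))

population = [random_expr() for _ in range(200)]
for _ in range(100):
    population.sort(key=fitness)          # selection: keep the best 50
    population = population[:50] + [mutate(random.choice(population[:50]))
                                    for _ in range(150)]
best = min(population, key=fitness)
print(best, fitness(best))
```

This toy usually stumbles onto a matching expression in a few seconds; the worry in the comment above is the (speculative) scenario where the same blind recipe, scaled up with enough hardware, starts turning up programs that matter.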
So yes, there's a lot of uncertainty here, but I think it's a mistake to say we don't need to worry about it until it's here. By the time it's clearly imminent and the mainstream has even started to accept it as possible, it will probably be impossible to ensure that (for instance) some irresponsible government isn't the first to achieve it, simply by throwing a lot of funding at the problem and doing it unsafely.