
What if you gradually introduce more and more complicated games? Start with Pong, then Pac-Man, then Super Mario World, and so on until you get to today's games, which are far more complicated and, more importantly, very realistic. Once your AI is well trained, attach a camera to the computer and point the machine at the most complicated game of all: reality.
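That curriculum idea can be sketched in a toy way. Assume a trivial guess-and-remember "agent" rather than any real RL algorithm; the GuessingGame class and train_on_curriculum function here are made up purely for illustration:

```python
import random

# Toy stand-in for a game: the agent must guess a hidden target in
# [0, difficulty); "harder" games just have a larger search space.
class GuessingGame:
    def __init__(self, name, difficulty, seed=0):
        self.name = name
        self.difficulty = difficulty
        self.target = random.Random(seed).randrange(difficulty)

    def play(self, guess):
        """Return reward 1 on a correct guess, else 0."""
        return 1 if guess == self.target else 0

def train_on_curriculum(games, episodes_per_game=500, seed=1):
    """Play games in order of increasing difficulty. The 'agent' guesses
    at random until it wins, then remembers the winning guess."""
    rng = random.Random(seed)
    memory = {}    # game name -> learned winning guess
    history = []
    for game in sorted(games, key=lambda g: g.difficulty):
        wins = 0
        for _ in range(episodes_per_game):
            guess = memory.get(game.name, rng.randrange(game.difficulty))
            if game.play(guess):
                memory[game.name] = guess   # remember the solution
                wins += 1
        history.append((game.name, wins / episodes_per_game))
    return history

games = [GuessingGame("Mario", 50, seed=3),
         GuessingGame("Pong", 2, seed=1),
         GuessingGame("Pac-Man", 10, seed=2)]
history = train_on_curriculum(games)
```

Even in this cartoon, the curriculum shows up as a win-rate gradient: the easy game is solved almost immediately, the hard one burns more episodes on search before it pays off.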


Good point: I don't think they've ever taken an agent that has been learning for a while in one environment and placed it in another. I don't know, but I would expect that the current version, at least, would have to fail hard, basically unlearning everything, before it started to succeed in the new environment. That's a weakness, if so.

How do humans avoid this? Because our brains evolved to be plastic, but not too plastic. We encounter new environments (like going from one level of Pac-Man to the next), but not new laws of physics (like going from Pac-Man to Mario).
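One crude way to make a learner "plastic but not too plastic" is to penalize the new weights for straying from the old ones; that's roughly the intuition behind elastic weight consolidation. Here is a toy sketch with a one-parameter model y = w * x (the train function, tasks, and numbers are all invented for illustration):

```python
def train(w, data, steps=200, lr=0.01, anchor=None, strength=0.0):
    """Gradient descent on mean squared error; optionally add an L2
    penalty strength * (w - anchor)**2 pulling w toward old weights."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        if anchor is not None:
            grad += 2 * strength * (w - anchor)
        w -= lr * grad
    return w

task_a = [(x, 2.0 * x) for x in range(1, 6)]   # task A: y = 2x
task_b = [(x, 5.0 * x) for x in range(1, 6)]   # task B: y = 5x

w_a = train(0.0, task_a)                       # learns w close to 2
w_free = train(w_a, task_b)                    # unconstrained: forgets A, w close to 5
w_anchored = train(w_a, task_b, anchor=w_a, strength=11.0)  # compromise between tasks
```

Without the anchor, task B overwrites task A completely; with it, the weight settles between the two solutions, trading some performance on B for not unlearning A.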


The whole point is that the intelligence is "universal." By definition, we are talking about a thing that is able to transcend what it knows. So to me, the whole point is to introduce a variety of environments so that it learns the truly salient knowledge, the kind that applies across many games. And the most important fact of all is this: stay alive. We might be able to bootstrap an AI up to a self-aware, self-preserving, rational agent that can adapt to any environment.
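The "variety of environments" point can be illustrated with a toy model: training sequentially on two environments leaves the learner fit only to the last one, while interleaving them settles on the shared structure. The sgd helper and the environments here are hypothetical, not anyone's actual training setup:

```python
def sgd(w, batches, lr=0.01):
    """One full-batch gradient step per batch on squared error for y = w*x."""
    for data in batches:
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

env_a = [(x, 2.0 * x) for x in range(1, 6)]    # environment A: y = 2x
env_b = [(x, 4.0 * x) for x in range(1, 6)]    # environment B: y = 4x

# Sequential: 200 batches of A, then 200 of B -> ends up fit to B only.
w_seq = sgd(0.0, [env_a] * 200 + [env_b] * 200)
# Interleaved: alternate A and B -> oscillates around the shared middle.
w_mix = sgd(0.0, [env_a, env_b] * 200)
```

The sequential learner lands near 4 (environment B's answer, A forgotten); the interleaved one hovers near 3, the compromise that does tolerably in both, which is the cartoon version of "salient knowledge applicable across many games."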



