I think you've hit on a weakness of the whole machine learning paradigm. I might get this wrong, but I believe every machine learning algorithm necessarily introduces some bias into selecting which possibilities to consider, and without some kind of bias, learning is impossible.
But once you've chosen how you will bias your model, you will only search for solutions in the space that bias defines. So you end up figuring out the parameters of the ellipses describing the movement of heavenly bodies, but never questioning whether ellipses were a good choice to begin with. There is also feature selection: how you decide which aspects of reality (or measurements of reality, really) are relevant to the learning problem. (There are feature selection techniques, but those presume you already have a finite set of candidate features and only determine which ones carry the most value.)
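To make the ellipse example concrete, here's a toy sketch (my own illustration, not from any real pipeline): I generate hypothetical noisy (r, θ) observations of an orbit and fit them with ordinary least squares. The only hypothesis space the fit can ever explore is Keplerian ellipses, r = p / (1 + e·cos θ), because that's the form I baked in before seeing any data:

```python
import numpy as np

# Hypothetical data: noisy (r, theta) observations of one orbit.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 50)
p_true, e_true = 1.5, 0.3  # assumed "true" orbit parameters
r_true = p_true / (1.0 + e_true * np.cos(theta))
r_obs = r_true * (1.0 + 0.01 * rng.standard_normal(theta.shape))

# The inductive bias lives here: we only ever search the space of
# ellipses r = p / (1 + e*cos(theta)), which linearizes to
#   1/r = 1/p + (e/p) * cos(theta),
# so a linear least-squares fit recovers p and e. Nothing in this
# procedure can question the ellipse assumption itself.
A = np.column_stack([np.ones_like(theta), np.cos(theta)])
coef, *_ = np.linalg.lstsq(A, 1.0 / r_obs, rcond=None)
p_hat = 1.0 / coef[0]
e_hat = coef[1] * p_hat
print(f"fitted p={p_hat:.3f}, e={e_hat:.3f}")
```

The fit will happily return "best" parameters even if the underlying motion weren't an ellipse at all; the residuals might hint at a model mismatch, but the search itself never leaves the ellipse family. That's the sense in which the bias is both what makes learning possible and what limits it.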
It seems that, perhaps, this kind of paradigm-busting discovery is out of reach of current machine learning methods, and that the decisions about what to model and how to bias your model are where humans add value to the process.
This is all philosophical bullshit at this point, but I remain curious about the relationship between learning algorithms and scientific discovery. If anyone is still reading, are there any good books on this topic?