One neat thing about R is that it has become standard in academic statistics to include an R implementation of your new idea with a journal paper. For example, Gareth James and his colleagues came up with a new method called the Gauss-Dantzig estimator for prediction in settings where the number of parameters is much larger than the number of data points. You can download the R code from his research web page here:
http://www-rcf.usc.edu/~gareth/research/
This makes it much, much easier to try out new prediction methods on your own data. No more writing code from the paper's description and hoping (praying) that you didn't get it entirely wrong! Instead you can use the researcher's own code to quickly figure out whether the new method is better or worse than previous methods on your data.
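As a minimal sketch of that workflow (the file name and fitting function below are placeholders for whatever the downloaded code actually provides, not the real interface):

    # Hypothetical sketch: "dantzig_selector.R" and dantzig_fit() are
    # placeholders for whatever the downloaded code actually exports.
    source("dantzig_selector.R")

    set.seed(1)
    n <- nrow(x)                          # x: predictor matrix, y: response vector
    train <- sample(n, n / 2)

    fit <- dantzig_fit(x[train, ], y[train])   # placeholder fitting call
    pred <- predict(fit, x[-train, ])          # placeholder predict method

    mean((y[-train] - pred)^2)            # held-out MSE for the new method
    mean((y[-train] - mean(y[train]))^2)  # trivial baseline: predict the mean

The point is just that the comparison is a dozen lines of glue code instead of a reimplementation project.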
That being said, R does take a lot of getting used to. Graphics in general are tricky, although the ggplot2 package makes some things easier and can produce pretty results:
http://had.co.nz/ggplot2/
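For instance, a scatterplot with a fitted trend line, which takes a fair bit of fiddling in base graphics, is a few lines in ggplot2 (here using the mtcars data set that ships with R):

    library(ggplot2)

    # Scatterplot of car weight vs. fuel economy with a linear trend line.
    ggplot(mtcars, aes(x = wt, y = mpg)) +
      geom_point() +
      geom_smooth(method = "lm")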
There also isn't a great story, so far as I know, for using R on massive data sets that don't fit in main memory. It doesn't take much data before an algorithm that needs O(n^2) memory is eating more than 15 GB of RAM. At that point you're out of the territory of Amazon instances you can rent cheaply and into building a box just for R, or into refactoring your data so you can do the computation in pieces. So you do have to watch out for that a bit when using the default packages.
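The back-of-the-envelope arithmetic is easy to do in R itself before you commit to a computation: a dense n-by-n matrix of doubles costs 8 * n^2 bytes.

    # How big can n get before an n x n double matrix blows past 15 GB?
    n <- 45000
    n^2 * 8 / 2^30               # ~15.1 GiB for the full matrix

    # dist() stores only the lower triangle, which roughly halves that:
    n * (n - 1) / 2 * 8 / 2^30   # ~7.5 GiB

So a pairwise-distance computation on a few tens of thousands of rows is already pushing against a big machine, which is sooner than people tend to expect.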