Are you sure there is even a philosophical difference there? I'm still going to use a program to do the heavy lifting of calculating the errors. I'm happy to argue that there is no philosophical difference; if both approaches are feasible and take a comparable amount of time to implement, then one gets the exact answer to the question at hand and the other is a poor man's shortcut for people who aren't confident in their maths skills.
The calculations should be arriving at the same numbers in the end, and implementing a simulation vs. an error propagation tree isn't going to favour the simulation for simple scenarios.
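To make that concrete, here is a minimal sketch (not from the thread; all names and values are illustrative, and it assumes independent Gaussian errors) where first-order propagation and a simulation land on the same number for a simple product:

    import numpy as np

    rng = np.random.default_rng(0)

    # Measured values with assumed 1-sigma uncertainties (illustrative numbers).
    x, sx = 10.0, 0.3
    y, sy = 4.0, 0.2

    # Analytic first-order propagation for z = x * y:
    # (sz/z)^2 = (sx/x)^2 + (sy/y)^2
    sz_analytic = abs(x * y) * np.hypot(sx / x, sy / y)

    # Monte Carlo: sample the same assumed distributions and take the spread.
    z = rng.normal(x, sx, 1_000_000) * rng.normal(y, sy, 1_000_000)

    print(f"analytic: {sz_analytic:.3f}, monte carlo: {z.std():.3f}")
    # Both come out around 2.33.

For a case this simple, the simulation is a million samples of work to recover a one-line formula.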
MCM is great for when the usual error propagation is infeasible. It is sloppy when the error propagation could be done with one or two analytic formulas. Simulation isn't appropriate for problems that are easy analytically. For example, in practice, most of the examples in the article are not suitable problems for breaking out MCM (and although the author probably knows that, maybe not everyone on HN does). The Pi example might be a standard application, but even there Pi in particular is not a great choice for a showcase, because Pi has some very nice analytic approximations.
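For instance (a hedged sketch, with illustrative sample sizes): the dart-throwing estimate of Pi converges like 1/sqrt(n), while a one-line rational approximation is already accurate to about seven digits:

    import numpy as np

    rng = np.random.default_rng(1)

    # Dart-throwing Monte Carlo: fraction of random points inside the
    # unit quarter-circle, times 4.
    n = 1_000_000
    pts = rng.random((n, 2))
    pi_mc = 4.0 * np.mean(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0)

    pi_rational = 355 / 113  # classic approximation, off by only ~2.7e-7

    print(f"monte carlo ({n} samples): error {abs(pi_mc - np.pi):.1e}")
    print(f"355/113:                   error {abs(pi_rational - np.pi):.1e}")
    # After a million samples the MC error is still around 1e-3.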
I think the value is in checking assumptions - if the analytical approach assumes a normal distribution and you then run MCM and get a vastly different result, it is probably worth revisiting the assumptions made by the analytical model.
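Something like this sketch (assumed values, not from the article): propagate through z = exp(x) with a deliberately large relative uncertainty, and the linearised "normal" answer and the Monte Carlo spread disagree badly:

    import numpy as np

    rng = np.random.default_rng(2)

    # x = 0 +/- 1 (illustrative), propagated through z = exp(x).
    mu, sigma = 0.0, 1.0

    # First-order ("normal") propagation: sz = |dz/dx| * sigma = exp(mu) * sigma.
    sz_linear = np.exp(mu) * sigma

    # Monte Carlo under the same assumed input distribution.
    z = np.exp(rng.normal(mu, sigma, 1_000_000))

    print(f"linearised: {sz_linear:.2f}, monte carlo: {z.std():.2f}")
    # Roughly 1.00 vs 2.16 -- a vastly different result.

The mismatch is the signal that the linear/normal assumption has broken down (z is strongly skewed), which is exactly the kind of assumption check meant above.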
That is an inappropriate way of checking your assumptions. Assuming a normal is assuming a model (a model is a simplification of a situation that considers only the relevant parts). Writing a simulation is also assuming a model, although the process is a lot less formal than traditional statistical regression.
If a modeler has assumed two different models of a situation and is getting vastly different results, then that is evidence that something is wrong, but it is also evidence that the modeler is out of their depth. It is not appropriate to fit two different models and then claim that the differences are delivering insight. The differences are revealing big gaps in the modeler's understanding of key influences, rendering both models highly suspect. They should not be creating models; they should be putting more time into understanding the thing they are modeling.
There are times when a modeler would have two different models, but they should certainly not be surprised that they give different results. Indeed, the usual reason for building two different models is precisely that they are expected to give different results. There will probably be other edge cases, but in my experience they are rarer than people simply making mistakes about where MCM is appropriate.