Sadly, "finite time" is not what we need - they are looking for approximations that will be much rougher than the imprecision of finite numerical precision, and they want the approximation algorithm to have a good provable run time.
In general, taking theoretical approximation algorithms and shrinking the approximation factor below some threshold where you can't "tell the difference" is a dead end. For instance, this algorithm's running time has a poly^{1/\epsilon} factor, so you pay a huge cost for it. It is better to think of \epsilon as 1/3 or so.
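To make the cost concrete, here is a back-of-the-envelope sketch under the simplifying assumption that the running time behaves like n^{1/\epsilon} (just one hypothetical instance of a poly^{1/\epsilon} bound, not the algorithm's exact bound):

```python
# Hypothetical illustration: if the running time scales like n**(1/eps),
# then shrinking eps below "tell the difference" levels is hopeless.
n = 10**6  # assumed input size, for illustration only

for eps in (1/3, 1/10, 1/100):
    exponent = 1 / eps
    # report the running time as a power of 10, since n = 10^6
    print(f"eps = {eps:.3g}: running time ~ n^{exponent:.0f} = 10^{6 * exponent:.0f}")
```

With these numbers, \epsilon = 1/3 gives roughly 10^{18} steps, while \epsilon = 1/100 gives roughly 10^{600}: the exponent blows up long before the approximation error becomes comparable to numerical precision.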