
The subject was 'ability to optimize' and you have changed it to 'how optimized the framework is already'. Most of one's ability to optimize is not dictated by anything about the framework.

Benchmarks that artificially exclude common optimization techniques avoid the interesting question of how much room there is to optimize, in favor of meaningless pissing matches about 'language speed' or 'framework speed'.



Pekk, I appreciate the response, but I don't see it that way. I see an efficient framework as one that gets out of the way, not consuming too many CPU cycles for its plumbing business and leaving my application room for me to code quickly and inefficiently, then optimize when the time comes.

If I build on an efficient/high-performance framework, I can write sloppy code to start and get the job done fast. When the time comes to optimize, I have a great deal of headroom available. I see this as "ability to optimize."

If, on the other hand, the framework (and, as importantly, the platform) is already consuming a great deal of the CPU's clock cycles for things outside of my control, my ability to optimize is greatly diminished. I will run into a brick wall unwittingly erected by the framework and its platform.
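
To put a rough number on that plumbing cost, here's a minimal sketch (assuming Python and Flask, neither of which this thread names): serve a no-op endpoint and hit it with a load generator such as wrk or ab. Whatever latency and CPU that burns is the framework's fixed overhead, which no amount of application-level optimization can reclaim.

    from flask import Flask

    app = Flask(__name__)

    @app.route("/noop")
    def noop():
        # Does no application work; any cost measured here is the
        # framework's routing, request parsing, and response plumbing.
        return ""

    if __name__ == "__main__":
        app.run()

Comparing the no-op numbers against a real endpoint separates the framework's fixed cost from your application's.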

If my ability to optimize becomes a matter of replacing core components of the framework with marginally faster alternatives, the experience devolves into a frustrating guessing game fraught with arcane insider knowledge ("Everyone knows you don't use the standard JSON serializer!") and meandering, futile experimentation ("I wonder if using a while loop rather than a for loop would squeeze this request down to 200ms?").
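
To make that guessing game concrete, here's a minimal sketch (assuming Python and the third-party ujson package, neither of which is named above) of the kind of swap in question: timing the standard library's JSON serializer against a drop-in alternative to see whether the "insider" replacement actually buys anything for your payloads.

    import json
    import timeit

    # ujson stands in for the hypothetical "faster" drop-in
    # serializer; pip install ujson.
    import ujson

    payload = {"id": 42, "name": "widget", "tags": ["a", "b", "c"] * 10}

    # Time 100,000 serializations of the same payload with each library.
    stdlib_time = timeit.timeit(lambda: json.dumps(payload), number=100_000)
    ujson_time = timeit.timeit(lambda: ujson.dumps(payload), number=100_000)

    print(f"stdlib json: {stdlib_time:.3f}s")
    print(f"ujson:       {ujson_time:.3f}s")

Even when the swap wins, the gain is bounded by the serializer's share of the request; it does nothing about the cycles the framework spends everywhere else.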

I'd rather know that the framework is designed to leave as many of the CPU's cycles as possible to me and my application. I can then be reckless at first and careful when time permits.

Which benchmarks artificially exclude common optimization techniques? If you're referring to ours, please tell us what is on your mind. You brought this complaint up in the comments on our most recent round [1], but didn't follow up to let us know what we did wrong. We are absolutely not interested in artificially excluding common optimization techniques. In fact, we want the benchmarks to be as representative of common production deployments as possible.

[1] https://hackernews.hn/item?id=5590161



