
Copied from end: In summary:


• Keep regression tests around for up to a year — but most of those will be system-level tests rather than unit tests.

• Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value.

• Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test — context is everything.

• Design a test with more care than you design the code.

• Turn most unit tests into assertions. (A sketch of this follows the list.)

• Throw away tests that haven’t failed in a year.

• Testing can’t replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.

• If you find that individual functions being tested are trivial, double-check the way you incentivize developers’ performance. Rewarding coverage or other meaningless metrics can lead to rapid architecture decay.

• Be humble about what tests can achieve. Tests don’t improve quality: developers do.
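
To make the "turn most unit tests into assertions" bullet concrete, here is a minimal sketch (my own illustration, not code from the article): the invariant moves out of a separate test and into the production code as a runtime assertion, so it is checked on every real execution rather than only when the test suite runs.

    class Account:
        """Toy account used to illustrate moving a unit test into an assertion."""

        def __init__(self, balance: int):
            self.balance = balance

        def withdraw(self, amount: int) -> None:
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            # The invariant a separate unit test used to check after the fact
            # now lives in the code and fires on every call.
            assert self.balance >= 0, f"balance went negative: {self.balance}"

    # Roughly what the equivalent unit test looked like before the move:
    def test_balance_never_negative():
        acct = Account(100)
        acct.withdraw(30)
        assert acct.balance >= 0

The trade-off is that the assertion verifies the invariant wherever the code runs, while the dedicated test only verified it for the inputs the test happened to pick.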


This set of takeaways seems quite solid to me.

In particular, I've pushed the "remove tests that haven't failed in years" before, but never had any traction. Are there teams that actually do this?


I really hope not. That would be kind of like saying "this pre-flight checklist hasn't failed for years, so let's just skip it before we take off."


Ha! I always assume some good faith in the judgement of the people that would be removing things. Didn't realize that would be withheld here.

So, in short, no. It is not just a time-bound thing. However, if you have not had a test fail in years, it is a good time to audit the test to make sure it is still relevant. If it isn't, get rid of it.

It is this kind of logic that insists on the pre-flight video that everyone ignores. We can, and should, do better.


Great analogy!


You should remove tests that are not testing the unit in question (or move them to another unit).

The article mentions that testing X=5 is not worthwhile.

It depends on your unit. If the unit is a calculator class, then testing this simple function may be worthwhile. If the class is a high-level business rule, it may not add any value. But if the test is exercising a parser that parses X=5 as an example of X=<value>, then sure, you must test this trivial case as well.
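
For instance (a hypothetical parser, invented here for illustration), the trivial X=5 input is exactly the case an assignment parser's tests should pin down:

    def parse_assignment(text: str) -> tuple[str, int]:
        """Parse a statement of the form X=<value> into (name, value)."""
        name, sep, value = text.partition("=")
        if not sep or not name or not value:
            raise ValueError(f"not an assignment: {text!r}")
        return name.strip(), int(value)

    def test_parses_trivial_assignment():
        # Trivial for a calculator class, essential for a parser:
        # X=5 is the canonical instance of the X=<value> rule.
        assert parse_assignment("X=5") == ("X", 5)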

Think of unit tests as logs of debug sessions for the class in question, produced to prove a certain point about your code.

Keep them simple and easy to understand. If a test fails, the failure output should explain itself. If you need too much time to analyse the failure of a unit test, redesign the test and/or your code so that failures explain themselves better.
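
A small sketch of what a self-explaining failure can look like (the function and names here are my own, assuming a pytest-style assertion):

    def apply_discount(price: float, rate: float) -> float:
        return round(price * (1 - rate), 2)

    def test_discount_is_applied():
        price, rate = 100.0, 0.15
        result = apply_discount(price, rate)
        # The message carries the inputs and output, so a failure reads
        # like a line from a debug log rather than a bare AssertionError.
        assert result == 85.0, (
            f"apply_discount({price}, {rate}) returned {result}, expected 85.0"
        )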

I think the author has probably seen spaghetti unit tests too many times, i.e. unit tests that don't serve a purpose and just try to hit code coverage without the intention of proving a rule about the code.

Also, the advice that adjusting code for unit tests destroys your architecture is completely wrong. The better your code supports your tests, the better the architecture will be: it becomes more flexible and already has two use cases, the real business case and the unit test. This makes your architecture more robust against change.
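
As a sketch of that claim (my own example, not from the thread): making a class testable often means injecting its dependencies, and the same seam that serves the test also serves the real business case.

    import datetime

    class ReportService:
        # The clock is injected, so this class has two callers by design:
        # production passes nothing (real clock), tests pass a fixed one.
        def __init__(self, now=datetime.datetime.now):
            self._now = now

        def header(self) -> str:
            return f"Report generated at {self._now().isoformat()}"

    def test_header_is_deterministic():
        fixed = lambda: datetime.datetime(2024, 1, 1, 12, 0)
        assert ReportService(now=fixed).header() == "Report generated at 2024-01-01T12:00:00"

The injected clock is the "second use case": the test exercises the same seam that would let production swap in a different time source later.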


I would expand it to not just the author. I'd like to think most of us have seen enough spaghetti code to have this impression.

I do think adjusting your code for the unit tests can destroy the architecture. However, I don't think this is necessarily bad. Consider all of the "dual purposes" that pretty much every single piece of a rocket serves. The catch is that most of us are not building a rocket, so a forcing function for single responsibility is mostly a good thing.


I don't follow this logic. OK, so you removed tests "that haven't failed in years". Then, in a while, you decide to refactor code that those tests were covering == foot, meet shotgun.


I did not mean literally "hasn't failed in years? deleted."

Instead, lean on the judgement of the people working the system. If it hasn't failed in years, it should be audited to see if it can fail. And if not, removed/revised. Obviously at the discretion of the entire team. (This would be a code reviewed change, after all.)



