
I think it's important to note that the title says most, and doesn't claim unit tests are bad 100% of the time. It specifically recommends keeping them for critical functions/algorithms, or 'units of computation'.

But he outlines some of the less useful practices and common mistakes, and my takeaway from the article really hits spot on with some poor experiences I have had with teams that get lost in unit testing: it really can lower code quality and, simultaneously, velocity if not approached carefully.

Some key quotes from the article:

"If you find your testers splitting up functions to support the testing process, you’re destroying your system architecture and code comprehension along with it. Test at a coarser level of granularity."

This is all too common, especially with inexperienced developers. Management blindly requires 40, 60, 80, or 100% test coverage without thinking about whether it makes sense to test that particular part of the code, and furthermore doesn't take into account readability, or, in my experience, the pain of over-abstracting something. Few things are worse than trying to debug a program while dealing with over-abstraction hell, to the point where all you can read is the tests and the source code has become entirely overcomplicated, all in the name of keeping the code testable at the cost of it being understandable.

Developers with a lot of experience in system design and software architecture are in a much better place to write appropriate tests while still maintaining understandable source code, but if I had to choose between an overcomplicated codebase with 80% test coverage and a simpler codebase with 0% test coverage, I would choose the simple codebase every time.

"There’s something really sloppy about this ‘fail fast’ culture in that it encourages throwing a bunch of pasta at the wall without thinking much... in part due to an over-confidence in the level of risk mitigation that unit tests are achieving."

In modern development shops that do a lot of TDD, the tests are relied on way too much. Testing of any sort is not a silver bullet. But you find people, even in pretty big, mainstream development shops of large internet properties, relying almost solely on this. Then they pass their 'finished work' over to operations to be deployed, and when something breaks because there was no test counting how many file descriptors were used, the developers are always quick to say 'well, all the tests pass, so it's an operations problem now'.

"However, many supposedly agile nerds put processes and JUnit ahead of individuals and interactions."

In the article he mentions someone telling him that debugging isn't just what you do in front of a debugger (obviously debatable) but also what happens when you're staring at your ceiling, or discussing the inner workings of a program or algorithm with a counterpart. This is so key, and it's why pair programming is often helpful if you get into a good rhythm with someone. Thinking intrinsically about how software works is the takeaway here. All too often people rely on tests as a silver bullet, and the end result can be code that is overcomplicated, overconfident, and an operational nightmare when deployed. This sort of thing often produces a giant net loss in revenue, because of the net loss in a team's velocity to produce working code rapidly. When developers lean on tests less (but still employ them where it counts), you'll find easier-to-maintain code, written by people who will step up to the plate and be responsible for that code.

Obviously there are exceptions to this, and there are shops that strike the right balance, maintaining high-quality, understandable code alongside high velocity. Personally, in over 10 years in this industry, almost all examples of this that I have witnessed have been peer-reviewed open source projects.



>> If you find your testers splitting up functions to support the testing process, you’re destroying your system architecture and code comprehension along with it. Test at a coarser level of granularity.

The author has horrible reasoning. Splitting up large or complicated functions is almost always a good thing.

> if I had my choice between an over complicated codebase with 80% test coverage, or a more simple codebase with 0% test coverage, I would choose the simple codebase every time.

This is a false choice. I prefer a simple codebase with 80% coverage. The notion that highly tested code must be complex is simply not true.


The author's reasoning is fine.

"If you find your testers splitting up functions to support the testing process"; he's condemning splitting the function for that PARTICULAR rationale; he's not universally condemning splitting the function.

Of course it's a false choice. He's not saying you can't have both. He's using it as an illustrative example that if you're making the code more complex just to make it more easily testable (see prior point), then you're choosing the wrong thing to do.


One thing to consider, though, is that often when you realize splitting up a function will make it easier to test, it's because your implementation sucks: it's doing too much and is too tightly coupled. Realistically, how would splitting up a function make testing it easier unless the function is already complex and performing multiple tasks?

You can certainly argue that some of the clean code folks do a lot of needless abstraction that makes it harder to work on code, and I think that's true at times. But at the same time, a 200 line method doing 19 different things is also quite hard to understand and modify, and the reason testers want to split that method up is because it's really hard to understand and has too many possible outcomes.

I don't like to overly abstract things and I try to strike a balance here, but I can say without a doubt that I've never found it harder to understand and work on a single class with 20 methods that each do one thing (with descriptive method names) than I have a method with 200 lines of code doing the same 20 things. And the former is much easier to test as well.
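To make the contrast concrete, here's a minimal sketch (hypothetical names, not from the article or this thread) of the kind of split being defended: extracting a pure decision out of a do-everything function so it can be tested without touching I/O.

```python
# Hypothetical example: an invoice function that mixed a pricing rule
# with printing. Extracting the pure rule makes it testable in isolation.

def discount_rate(total: float, loyalty_years: int) -> float:
    """Pure business rule: unit-testable with plain assertions."""
    if total >= 1000:
        return 0.10
    if loyalty_years >= 5:
        return 0.05
    return 0.0

def print_invoice(total: float, loyalty_years: int) -> None:
    """Thin shell that keeps the I/O; no test needs to capture stdout."""
    rate = discount_rate(total, loyalty_years)
    print(f"total: {total * (1 - rate):.2f}")

# The extracted rule is trivially testable:
assert discount_rate(1200, 0) == 0.10
assert discount_rate(100, 5) == 0.05
assert discount_rate(100, 1) == 0.0
```

Here the seam is a real abstraction (a business rule) that's independently meaningful, which is what distinguishes this from splitting purely for the test harness.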


The idea that splitting up functions makes code more complex is ridiculous. The author claims splitting up functions is "destroying your system architecture". He's trying to claim the exact opposite of what usually happens.

If the code needs to be split up to support testing, then it's likely the code should be split up to support other development too. Splitting up large functions generally makes software better. Whether that splitting is done as part of normal refactoring or is motivated by a test suite seems irrelevant. Saying that small methods lead to complex code is insane.


"likely". "generally". I.e., not always.

If your only motivation is "it makes it easier to write tests", and there is no other gain, it falls into the remaining case that you even allow for. You're now splitting functions that don't make sense to be split, solely for the sake of making testing easier. And that is bad. A lovely discrete chunk of abstraction is being split across two functions, that you would never call separately, solely to aid testing. And that is bad. That is all this article is asserting with the statements you quote.


Nowhere does the article acknowledge that method decomposition is a valid software practice. It's pretty clear he considers splitting up functions to be bad regardless of the motivation. Like others have said: if a large function is too complex to test, then the codebase is probably improved by splitting it up. That is a benefit of testing, not a downside.



