A few things many coders don't worry about enough when it comes to testing:
1: Most test code is of lower quality than production code.
2: Tests often contain assumptions, and those assumptions are usually not checked against any external reference. Whatever is assumed within a test is de facto law. Consider mocks: what ensures that mocks have the same behavior as the objects or services they're mocking? Mostly it's just an untested assumption.
3: One of the biggest classes of errors in coding is omission, and there, tests barely protect you at all. If you forget to implement a particular feature or aspect of the code, chances are you're going to forget to implement a test for it as well. The way to catch these problems is thorough code review and integration/beta testing.
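Point 2 above can be made concrete. Here's a minimal sketch (all names invented for illustration) of a hand-written mock that encodes an unchecked assumption about error handling, so a unit test passes against the mock while the same call fails against the real thing:

```python
class PriceService:
    """The 'real' implementation: raises KeyError for unknown items."""
    PRICES = {"apple": 100, "banana": 50}

    def price(self, item):
        return self.PRICES[item]  # KeyError if item is unknown


class FakePriceService:
    """A hand-written mock. Its unchecked assumption: unknown items
    cost 0, instead of raising like the real service does."""
    def price(self, item):
        return {"apple": 100, "banana": 50}.get(item, 0)


def total(service, items):
    """Code under test: sums prices via whichever service it's given."""
    return sum(service.price(i) for i in items)


# The unit test against the mock passes happily...
assert total(FakePriceService(), ["apple", "kiwi"]) == 100

# ...but the identical call against the real service blows up.
try:
    total(PriceService(), ["apple", "kiwi"])
    raised = False
except KeyError:
    raised = True
assert raised  # the mock's assumption was never checked against reality
```

One partial remedy is a shared contract test: run the same suite of behavioral assertions against both the real service and its fake, so any divergence (like the KeyError case above) surfaces immediately.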
Ultimately you get into a "who tests the testers?" problem, which, most of the time, is answered with the resounding noise of crickets. Tests need code review. Tests need owners. Tests need to be challenged. Tests need justifications. A lot of the critical rigor surrounding testing is eroded by common practices which encourage a distinct lack of rigor around test writing (TDD, I'm looking at you). Tests aren't magic; they're just code. They'll tell you what you ask, but if you're not asking the right questions the result is just GIGO: garbage in, garbage out.
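To see what "asking the wrong question" looks like in practice, here's a hypothetical sketch: the function has a rounding bug, but the test only asserts a property that's trivially true, so it passes and nobody challenges it.

```python
def discounted(price, pct):
    """Apply a percentage discount. Bug: integer floor division
    silently drops the fractional part of the discount."""
    return price - price * pct // 100


def test_discounted_returns_an_int():
    # A test asking the wrong question: it checks the result's type,
    # not its value, so the rounding bug sails through review.
    result = discounted(99, 10)
    assert isinstance(result, int)


test_discounted_returns_an_int()  # passes; the bug survives
```

A reviewer challenging this test would ask: what *value* should `discounted(99, 10)` produce, and against what external reference (spec, invoice rules) was that value checked? Without that, the test is green noise.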
Too many devs think that unit tests are cruise control for quality. They're not. Doing testing right should be just as rigorous and just as difficult as implementing features.
Turtles all the way down, right? The point being that ultimately you need to have processes other than automated tests to ensure code, product, and test quality. Otherwise you're just shifting the problem around. Bad tests can be just as hazardous as bad code, if not more so, since they can easily waste lots of development resources which could have been used more productively.