Unfortunately, 100% testing is only effective if you can detect 100% of the errors it triggers. The only bug I found in SQLite was an off-by-one in the btree code that was mostly harmless unless your memory allocator was particularly fussy.
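Roughly the shape of bug I mean (just an illustrative sketch, not the actual SQLite code): a one-byte write past the end of a heap allocation. Most allocators round request sizes up, so nothing visibly breaks, but a fussy allocator with guard bytes, or a run under ASan/valgrind, will flag it immediately.

    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Ask for exactly 8 bytes; many allocators hand back a larger chunk. */
        char *buf = malloc(8);
        if (buf == NULL) return 1;

        /* Off-by-one: writes 9 bytes (8 chars + the NUL terminator) into an
           8-byte buffer. Usually goes unnoticed because of allocator rounding,
           but a strict allocator with redzones will abort right here. */
        strcpy(buf, "12345678");

        free(buf);
        return 0;
    }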
100% testing is effective, just not 100% effective. And 100% testing is strictly better than 99% testing, which is strictly better than 50% testing (when considering only its bug-detecting capabilities).
I'm glad you appended "when considering only its bug-detecting capabilities". Something like SQLite, whose functionality is largely static, definitely benefits from the additional test coverage. But the majority of code out there, code that's constantly evolving, code to which functionality is frequently added and removed, is too often prematurely cast in stone by overzealous testers, making maintenance and evolution significantly more difficult than it ought to be. In a lot of cases, too much test coverage actually reduces the value of the code.
There is a strong case that more tests allow you to make changes to the code with some confidence that there won't be unintended side effects. Maybe SQLite's level isn't appropriate in most cases, but that doesn't mean high levels of code coverage are bad. It can also give you confidence to upgrade underlying frameworks or libraries.
I only found the bug and never quite understood it; after seeing how disturbing the fix was, I decided some things were best left unlearned. http://www2.sqlite.org/cgi/src/fdiff?v1=fa113d624d38bcb36700...
That said, SQLite is one of the most reliable and best-designed libraries I've used. Software is hard.