
You're not doing Test-Driven Development if you're writing code and then writing tests, though. You must write your test first, and only after you see it failing, write the code to make it pass. You have to stick to this cadence if you want to put TDD in practice.


Agreed... though I guess I gotta ask, what did I write that made you believe that I think writing code before tests is TDD?


I think your comment just made me understand what was nagging me about testing: writing tests is a form of decomposition. The method/function is complex/complicated, and therefore hard to reason about. The unit tests limit the scope of the input so that, in those particular instances, the method/function becomes simple enough to reason about that a test can be written.

Sadly, I feel there are things for which tests can't be written. Say a function takes two integers and adds them together (the typical 2+2=4, therefore it works for all n). Does this work when the integers are near-infinitely large (trillions of digits -- theoretically possible with Python)? How would a unit test validate this?

If you wanted to test all combinations possible, you would have to brute-force until the end of the universe, or until the machine runs out of hard drive space (or SAN space), whichever comes first. If you wanted to take a statistically significant sample, you would only have an elevated level of certainty, not an absolute level of certainty.

I think that what the author is pointing at is that, like mathematics (which, let's face it, the human brain is much better at than computers), programming is best done in the human brain, and that once the human brain has satisfied itself of the correctness of the program, the coding becomes simple.

And the program no longer needs to be decomposed, because it is understood as a whole.


Random property-based testing is useful for that sort of thing, as used in the QuickCheck library (most popular in Haskell/Erlang but there are ports to pretty much every language in existence). One property you could use for your example would be to have a simpler, slower reference implementation that you trust and test that it produces the same output as the one you've written. This isn't a verification but is very effective at snuffing out bugs.
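
As a concrete sketch of that reference-implementation property, here it is in Python using Hypothesis (one of the QuickCheck ports). fast_add here is a hypothetical function standing in for whatever you've written; the trusted, slower reference is just Python's built-in + operator:

    from hypothesis import given, strategies as st

    def fast_add(a, b):
        # Stand-in for the implementation under test; imagine a clever,
        # fast version you don't fully trust yet.
        return a + b

    @given(st.integers(), st.integers())
    def test_fast_add_matches_reference(a, b):
        # Hypothesis generates unbounded integers, so the astronomically
        # large values a hand-picked example would never cover get
        # exercised too.
        assert fast_add(a, b) == a + b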

But you are right that testing does not verify the correctness of code (unless you are writing something like a boolean -> boolean function). Still, property-based testing is often a stepping stone to verification, since write code->find bugs->fix code->repeat is a quicker feedback cycle than the equivalent write code->fail to prove it works->wonder if it's broken or if you just don't know how to prove it.


Suppose you write a method, addify, that takes two inputs and returns their sum. What would you do next? You'd probably do something like you just suggested: verify that when you add 2 and 2, you get 4. So, save it. This isn't a complete or perfect test. In fact, this particular test is risky, because later, when someone decides that addify should multiply, your tests will still pass (2 * 2 is also 4). I'm not going to pretend that this solves your problem. But if you had tested that 2 and 3 give 5, it would have protected you from that later scenario. And IMO, it's not the fault of the dev who turned addify into a multiplication function - how is he or she supposed to know it broke the code if there aren't any tests?
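
Concretely, a sketch (addify being the hypothetical method from above):

    def addify(a, b):
        return a + b

    def test_addify():
        # 2 + 2 == 4, but 2 * 2 == 4 as well, so this assertion alone
        # keeps passing if someone changes addify to multiply:
        assert addify(2, 2) == 4
        # 2 + 3 == 5 while 2 * 3 == 6, so this one catches the change:
        assert addify(2, 3) == 5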

To me, though, the bigger problem is when you're trying to figure out what your code is supposed to do by writing code, and then shoehorn that process into TDD. Like, write down 42, then make it fail, and then make it pass.


"If you wanted to test all combinations possible, you would have to brute-force until the end of the universe"

You only need to test the combinations of input values that affect the execution flow. However, to do that properly, you need to know the flow of the methods your method calls. (E.g. you would need to know that substring throws an exception if the string is shorter than expected.) This kind of information could be captured in some kind of metadata that could be propagated up the call chain for testing.
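
A sketch of what I mean, with a hypothetical describe function that has exactly three execution-flow paths, so three tests cover every flow rather than every possible integer:

    def describe(n):
        # Hypothetical function with three execution-flow paths.
        if n < 0:
            return "negative"
        elif n == 0:
            return "zero"
        return "positive"

    def test_describe_covers_every_branch():
        assert describe(-5) == "negative"  # any n < 0 takes this path
        assert describe(0) == "zero"       # only n == 0 takes this path
        assert describe(7) == "positive"   # any n > 0 takes this path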


But then you still wouldn't know that the function works for any two integers, including those with trillions of digits.

Also, you would have to anticipate every way it could possibly be used, and that's impossible to know with certainty, so how could your tests be accurate?


Well, you wouldn't be able to conclude that your method works for any possible input. You'd have a verification for how it is supposed to work for a predetermined set of inputs.

Think about the situation above, where you're dealing with a method to add two integers. So, you test that 2 plus 3 gives 5. It passes. Later, you decide that this method should multiply rather than add, so you change it. Your unit test breaks.

In my personal experience, about 95% of the time, I want to keep the modification, and so I need to update the test. But every now and then, I realize that the test is accurate and I have introduced unintended side effects into my code.

That's just me developing for me. The tests are also very important when a new developer is working on the app. If they change something, they need to know if they've broken anything downstream.

It's not failsafe, but I do think it's a huge improvement over no tests.


If they've broken anything downstream... well, that would mean that something downstream is using it.

This, to me, would mean that the app is being used as an API. API rules need to be applied (don't change existing versions, etc). That's something OOP is very bad at, I think.

Following the Unix philosophy, or not adding to existing programs, would also alleviate the need for regression testing, no?


"More often, I'm writing tests and code essentially in parallel with each other (like, write a line of code, write a test, write a line of code, write a test). I suppose I could reverse them."

In order to do TDD, you have to do it the other way around, no backsies. The practice is very dogmatic about this. It's kinda tricky, but all you have to do is write what you wish you had.


Huh. I intended that bit as a specific example of a non-TDD approach, where you don't write tests first. So I'm not surprised that you identified it as a non-TDD approach, I'm just a little confused about why you thought I was presenting it as an example of a TDD approach.

Guess that wasn't clear. Oh well.


Isn't this still TDD though?

My understanding was that writing tests first is known as Test-First Development, which is a subset of TDD. TDD allows you some flexibility to write code first, as long as the tests 'drive' the development process?


No, you shouldn't write code first at all. As soon as you do, you break TDD. You have to write your test first. The practice is pretty dogmatic about this.

The trick to writing a test first is that you have to write what you wish you had. It's kinda like Composed Method in that way.
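
For example, a tiny sketch in Python, with a hypothetical slugify helper that doesn't exist yet when the test is written:

    # Written before slugify exists, so it fails first (red) and only
    # passes once you implement the function you wished you had (green).
    from mymodule import slugify  # hypothetical module and function

    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"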


Jesus... All these rules...

I'll stick with the Smalltalk adage: code a little, test a little.

No need for dogma.


I like it ;) Might borrow that one



