
A lot of the time when doing math, you literally don't want to know what a particular symbol stands for - you just want to manipulate the symbols abstractly. Too much interpretation can interfere with the process of pattern recognition that is essential for doing mathematics well. You also see this in programming whenever someone writes

    public interface List<T> { ... }
in Java, or when you write

    data Tree a = Empty | Branch a (Tree a) (Tree a)
in Haskell. It doesn't matter what 'T' and 'a' are, so we use short, one-character representations for them. The fact that you can write Bayes' rule as P(A|B) = P(A)P(B|A)/P(B) (where I've even used a zero-character representation for the background information!) expresses the fact that A and B can be arbitrary events, and don't need to have any connection to hypotheses, models or data. It just happens that one of the applications of Bayes' rule is in fitting scientific hypotheses to data.
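
To make that concrete, here's a minimal Haskell sketch (the function and argument names are made up) of Bayes' rule written over plain probabilities, with no commitment to what the events A and B actually mean:

    -- pA = P(A), pB = P(B), pBgivenA = P(B|A); returns P(A|B).
    posterior :: Double -> Double -> Double -> Double
    posterior pA pB pBgivenA = pA * pBgivenA / pB

    main :: IO ()
    main = print (posterior 0.01 0.05 0.9)  -- roughly 0.18

The events could just as well be hypotheses and data, or coin flips and weather; the rule doesn't care.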

This question at math.stackexchange.com goes into a little more detail: http://math.stackexchange.com/questions/24241/why-do-mathema...



Even in your example, multi-character names are used for Branch, Empty, Tree, and List. And those are much more helpful than single-character names would be.

Plus, the math tradition of one-character variable names means that they've had to adopt several different alphabets just to get enough identifier uniqueness (Greek, Hebrew, etc., plus specialized symbols like the real, natural, and integer number set symbols). Which makes all that stuff a pain to type. And even then, there are still identifier collisions where different sub-disciplines have different conventions for the meaning of a particular character.

It's also annoying because single-character names are impossible to google for.


We would use multi-character names for Branch, Empty and Tree because it matters what those things represent. It would be thoroughly confusing if we instead wrote

    data T a = E | B a (T a) (T a)
However, we don't care what the 'a' represents. It's just a placeholder for an arbitrary piece of information. If we had to write

    data Tree someData = Empty | Branch someData (Tree someData) (Tree someData)
then we would just be introducing a lot of unnecessary line noise.
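
For contrast, here's a minimal sketch of how the placeholder gets pinned down only at the use site (reusing the Tree declaration from above; the value names are made up). The same one-character 'a' serves Tree Int and Tree String equally well:

    data Tree a = Empty | Branch a (Tree a) (Tree a)

    intTree :: Tree Int
    intTree = Branch 1 Empty (Branch 2 Empty Empty)

    nameTree :: Tree String
    nameTree = Branch "root" Empty Empty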

One difference between programming and mathematics is that programming is mostly interpreted in context, where it matters that this double represents elapsed time, and this double represents dollars in my checking account. Mathematics, on the other hand, is mostly interpreted out of context. I don't care what 'a' represents, all I care about is that it enjoys the relationship ab = ba with some other arbitrary object b.
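
That out-of-context style shows up in code as well. A minimal, hypothetical sketch: the law below mentions a and b only to state the relation between them, and nothing about what they represent ever enters into it.

    class Commutative m where
      op :: m -> m -> m
      -- law (by convention, not checked by the compiler): op a b == op b a

    instance Commutative Int where
      op = (+)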

If the mantra of programming is "names are important" then the mantra of mathematics might be "names aren't important".


Sure, your original example with single-letter type variables makes sense to me, since those variables could represent anything. I never meant to object to those. I just wanted to point out the fact that your example also included multi-letter names, while mathematics generally does not.

So if you really don't care what a variable represents, then I'd agree that a single-letter name is fine. Given that math is almost universally done with single-letter variable names, are you suggesting that in math you almost never care what a variable represents? This Wikipedia article makes me think otherwise; clearly, variables often have a fairly specific meaning.

http://en.wikipedia.org/wiki/Greek_letters_used_in_mathemati...


OK, this is a good reason, though I don't think it covers all the use cases, in particular on the borderline between math and physics. But yes, I can see how in programming, knowing what the variable really represents is usually helpful, while in math it usually isn't.

Thanks for the link, there are some good thoughts there.



