
Type inference is a powerful feature that lets you omit type annotations in most cases (except where the program would otherwise be ambiguous).

But type inference only works if there is some non-trivial base type shared by all the types I want to mix. Type inference can't figure out that None and a string are both valid return values for some function; I still have to declare a custom type that mixes them (like a Maybe in Haskell). Similarly for types declared in completely different packages that don't share any base types; they might be perfectly duck-type compatible, but how does the type inference system know that?



If String and None are both valid return types for a function, then you tag that so the inference system knows it's a valid solution -- the inferred type becomes Maybe String. The compiler will even help you get there.
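As a rough sketch of what that looks like (made-up function name, assuming GHC): no type signature is written, yet the Maybe String type is inferred from the Just/Nothing tags.

    -- greet has no signature; GHC infers greet :: Bool -> Maybe String
    greet loggedIn =
      if loggedIn
        then Just "hello"   -- the "string" case, tagged with Just
        else Nothing        -- the "None" case

    main :: IO ()
    main = print (greet True)   -- prints: Just "hello"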

> Similarly for types declared in completely different packages that don't share any base types;

If you know they share functionality, you can declare that they share functionality (any language with type classes). Once you've done that, the type inferencer will generalize the type of the function to the entire class. As a bonus, you automatically factor out shared code and make the program as a whole more modular.
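For instance, here's a rough sketch (made-up types and class, assuming GHC) of two unrelated types -- imagine they come from different packages -- unified by a type class you declare yourself. Once the instances exist, the function at the bottom generalizes to the whole class without a signature.

    newtype Celsius    = Celsius Double
    newtype Fahrenheit = Fahrenheit Double

    -- the shared functionality, declared explicitly
    class ToKelvin a where
      toKelvin :: a -> Double

    instance ToKelvin Celsius where
      toKelvin (Celsius c) = c + 273.15

    instance ToKelvin Fahrenheit where
      toKelvin (Fahrenheit f) = (f - 32) * 5 / 9 + 273.15

    -- no signature needed; GHC infers isBoiling :: ToKelvin a => a -> Bool
    isBoiling t = toKelvin t >= 373.15

    main :: IO ()
    main = print (isBoiling (Celsius 100), isBoiling (Fahrenheit 50))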

The answer to "how does the type inference system know this?" is usually that you've made those bits explicit. This sounds tedious right up until the first time you write a complicated program, get it to compile, and it just runs correctly the very first time. It gets better when you learn to work with the compiler and start to make judicious use of code holes (undefined or _ in Haskell): they let you stand in for an expression of any type so you can test unfinished programs, and the compiler can even tell you what type the unwritten code needs to have, which in most cases effectively tells you what the code needs to be.
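A rough sketch of a code hole (made-up function, assuming GHC): undefined typechecks as any type, so the surrounding program compiles and can be tested before this piece exists.

    summarize :: [Int] -> String
    summarize xs = "total: " ++ undefined   -- placeholder; crashes only if evaluated

    -- writing _ instead of undefined makes GHC report something like
    --   Found hole: _ :: [Char]
    -- i.e. the missing piece must be a String, e.g. show (sum xs)

    main :: IO ()
    main = putStrLn "the program compiles and runs; summarize just isn't finished"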

Lastly, in a language like Haskell, type declarations are almost always optional. They're used as a kind of documentation and an enforced contract for any other programmer who happens to read the code. Most types are inferred; if there's some ambiguity, the compiler will let you know, and you can fix the code. For example, if your function tried to return a String or Nothing, the compiler would complain that String doesn't match Maybe a and suggest Just <string value> instead. So you replace <string value> with Just <string value>, and now the function compiles.

If the requirements later change, you make the necessary changes the same way you would in a dynamic language, and then follow the chain of type errors to fix whatever has cropped up. Say you suddenly need a String or an Integer instead: you'd replace Nothing with the <Integer value> you now need. The compiler will complain that Maybe String doesn't match Integer, so you change your code to Right <string value> and Left <integer value>. Now the compiler agrees that everything makes sense again, and you continue on.
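A rough before/after sketch of that refactor (made-up function name and error code, assuming GHC):

    -- before: a String or nothing
    --   lookupName :: Int -> Maybe String
    --   lookupName 1 = Just "alice"
    --   lookupName _ = Nothing

    -- after: requirements change to a String or an Integer error code;
    -- following the type errors leads to Either Integer String
    lookupName :: Int -> Either Integer String
    lookupName 1 = Right "alice"   -- was: Just "alice"
    lookupName _ = Left 404        -- was: Nothing

    main :: IO ()
    main = print (lookupName 1, lookupName 2)   -- (Right "alice",Left 404)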


> If String and None are both valid return types for a function, then you tag that so the inference system knows it's a valid solution

In other words, I need to do extra work to tell the type system about the function's return type.

> If you know they share functionality, you can declare that they share functionality (any language with type classes)

Same comment here; the language is making me do extra work that I don't have to do in a dynamically typed language. So there's a tradeoff.

> This sounds tedious right up until the first time you write a complicated program, get it to compile, and it just runs correctly the very first time.

See, here's the thing: I often write non-trivial programs in Python that run correctly the very first time. So I've had this same experience in a dynamically typed language.

Perhaps a language like Haskell would raise the bar, so to speak, for how complex a program can get and still allow this to happen; but even then, it would only matter if you were writing the particular kind of program that can benefit from the difference.

> Lastly, in a language like Haskell, type declarations are almost always optional.

It's the "almost always" that's key here. What the rest of your discussion shows is that type declarations end up not being optional in precisely the cases where, in a dynamically typed language, you would just be making changes and running code, rather than spending extra work adding type declarations to resolve ambiguities for the compiler. So there's a tradeoff.

As I've said in several comments in this thread, I'm not saying dynamic typing is always better; I'm saying it depends on the kind of application you're writing. There are tradeoffs involved, and they don't always end up favoring static typing.



