Gabriel is also the author of the popular Pipes Haskell library, and a nice person in general. Both of these make me heartily recommend his other blog posts on various Haskell topics, which often explain advanced features remarkably well.
Agreed. I sent him an email asking for help on proving some category theory laws for some datatypes, and he wrote back a several page email answering everything in detail.
The idea that this composability is unique to Haskell is false. Java's streams are an example of exactly the same thing in another, non-functional, language. The Gang of Four "decorator", "composite", and "chain of responsibility" patterns are further general examples.
It is true, perhaps, that in Haskell, programmers reach for homogeneously-typed composition more readily than programmers in other languages. Good for them! But it's either arrogance or ignorance to assert that this is a special Haskell thing.
Furthermore, I am dubious that this really is a good strategy for building large programs. The idea that you can combine lots of parts of some type to build a bigger part of the same type is extremely appealing. But in my experience, the bigger part often has slightly different properties, behaviours, or uses which warrant a different type with more features. Unless you want to impose those features on the smaller types too.
For example, consider a batch application which processes files through a number of stages (I realise it's the 21st century, but apparently we still need to do this). There is clearly a type for a stage, with values for things like uncompressing, validating, renaming, parsing, etc. There is probably going to be a type for a chain of stages, with values for various uses of the application. A chain looks like it should have the same type as a stage - ultimately, both take a file in, and spit a file out.
But then, it turns out that we want to move the file through a sequence of directories, one for each stage, as we process it (the operations guys are really keen on this). Furthermore, we need to be able to report on what files are currently at which stage. So, a stage knows which directory it owns - presumably, it has a property of type directory for that. But a chain owns all the directories of its component steps, so it owns several directories - it's going to need a property of type collection of directory. So what do you do? Report a single directory for the chain, and somehow expose the rest through a backdoor? No, that's a kludge. Have every step report a collection of directories, which will mostly be single-element collections? No, that's weak, because the type of a step no longer fully describes the constraints on it. Use a higher-kinded type parameter, so the chain can have a collection of directories, while the steps have a single one? Mad wicked, but racks up the reader's cognitive load. Use different types for steps and chains? Well, actually, since that's simple and doesn't have any practical drawbacks, probably yes.
> The idea that this composability is unique to Haskell is false.
Haskell is one of the rare languages/communities where composability is the default. Even purely technically, few languages do composability as well as Haskell.
The Gang of Four patterns you mention most likely work around weaknesses of Java. As such, they demonstrate the shortcomings of Java, not its strengths. If you need a pattern that badly, make it a language feature, dammit. (Even without Lisp-style macros, source-to-source transformation is quite viable.)
> But in my experience, the bigger part often has slightly different properties, behaviours, or uses which warrant a different type with more features.
I'd say most of the time, you just failed to capture the commonalities. More specifically, you seek generality through exhaustiveness, when you should use genericness instead.
If you stumble upon something that looks like a monoid, except for such and such detail, then the details probably need to be either removed (they could be a design bug), or integrated into a generic bucket or something.
> For example, […]
Those specs suck. Whoever asked you this doesn't understand their own needs, or is struggling with a legacy behemoth that should eventually be replaced. (EDIT: or, as mightybyte suggested, you just approached the problem the wrong way. Which strengthens my point about failing to see the commonalities.)
> I'd say most of the time, you just failed to capture the commonalities. More specifically, you seek generality through exhaustiveness, when you should use genericness instead.
THIS is one of the most important things I've ever learned from programming Haskell[0]. The way to make things generic is to whittle away all of the sharp edges and implement them as compositions atop base patterns. Those base patterns arise out of polymorphism, not exhaustiveness.
You don't make lists generic by realizing that you need a list type to have all the properties of a string, a tree, a vector... You make it generic by realizing that much of its structure arises by forgetting what's inside.
[0] Which is not to say you can't learn it elsewhere, or that you cannot do it elsewhere. Instead, Haskell has the facility, the philosophy, and the community to do it really effectively all the time and so if you read Haskell code you'll be floored by some of the great examples possible there.
Fun fact: I have known this intuitively since forever, but only recently realized that when I say "generic", most programmers I know tend to hear "exhaustive" and recoil in horror, even though I really did mean "generic".
Now, I also know why I like parametric polymorphism, and dislike subclass polymorphism: the first is generic, while the second is exhaustive.
Imagine you want to process a list of foobarbaz, where elements can be foos, bars, or bazes.
Exhaustiveness is when you write foo-specific code, bar-specific code, and baz-specific code. Then your list-processing facility is general because it handles all the cases. A typical way to do that in practice is to use subtype polymorphism, with the class inheritance mechanism: have a foobarbaz interface, a foo class that implements it, a bar class that implements it, and a baz class that implements it. Each with their own code.
Genericness is when you ignore the foobarbaz specifics altogether: your code doesn't even mention the types. A typical way to do that in practice is to use parametric polymorphism (generics in Java, templates in C++). See C++'s Algorithm library for an example, or the standard Prelude from Haskell, or the OCaml standard library.
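To make the contrast concrete, here is a rough Haskell rendering of the two approaches (Foo, Bar and Baz are made up, and a closed sum type stands in for the interface-plus-subclasses arrangement):

data Foo = Foo
data Bar = Bar
data Baz = Baz

-- Exhaustive: a closed union of cases, plus one piece of code per case.
data FooBarBaz = F Foo | B Bar | Z Baz

describe :: FooBarBaz -> String
describe (F _) = "a foo"
describe (B _) = "a bar"
describe (Z _) = "a baz"

-- Generic: the list machinery never mentions Foo, Bar, or Baz at all.
count :: [a] -> Int
count = length

firstTwo :: [a] -> [a]
firstTwo = take 2

Adding a Qux means touching describe (and every function like it), while count and firstTwo keep working unchanged.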
Now I know why it didn't rely on first class functions.
It wasn't written in Smalltalk. It was written in the intersection of Smalltalk and C++. Sounds like Java, even though it didn't exist yet. Come to think of it, I wonder if they didn't have hints about this upcoming language from Sun. Java came out in 1995, but had been in the works since 1991…
So my point stands.
If it were really written in Smalltalk, it wouldn't have needed insanities such as the Visitor pattern to work around the lack of closures. It would have used Smalltalk blocks directly.
The book really was written with Smalltalk in mind with the C++ examples added for marketing reasons. If you want to go argue with Ralph Johnson (aka the god of Smalltalk) about it, be my guest.
Actually, I'm starting to want to read this book, if only to see how far down the rabbit hole goes. Depending on what I find, I may want to demolish it. I don't care if the authors are gods, this glimpse of their work does not look good.
The visitor pattern is about doing different things to objects based on their class. What would make it redundant would be multi-methods, or pattern matching, not blocks. Smalltalk doesn't have a good alternative to the visitor pattern.
Oh, my. This is even worse than I had thought. The motivating section makes me want to grab my closure hammer, especially the "reuse of iteration code" motivation. I don't know how I would handle it off the top of my head, but the way the Wikipedia article presents it feels wrong.
The Java example looks more explicit. Oh, crap. This is Bad, Wrong, Horrible design. Sure, now the compiler can help you not forget cases, but you're still on the wrong side of the Expression Problem: for every single subtype you add, you still need to add a case to each of your visitors.
Is this the kind of price one has to pay for class hierarchies?
Why are you talking about Java? You obviously don't know much about it. I don't even say this to defend Java, because I don't like it. But you're apparently just repeating secondhand information about it.
For each class, you basically need to write an "accept" method to call the visitor. Fine for now. But then, the visitor specializes depending on the type of the object! Couldn't we just inline the appropriate visit method directly at the call site? It would get rid of all the boilerplate introduced by the visitor pattern, without duplicating more code. It would still suck[1], but it would suck less.
If you read the "Details" section of that very Wikipedia article, you'll see why you can't just inline the accept method. You need both visit and accept in order to simulate double dispatch. accept performs one dispatch, visit performs the second. I suspect you could further generalize the pattern to k dispatches for some arbitrary but fixed value of k, though it would be very tedious.
Some algorithms, like collision detection for example, are readily described with double dispatch. So in a language that only offers single dispatch, like C++ or Java for example, the visitor pattern actually leads to more elegant code for such algorithms.
I'm not saying I particularly like this way of doing things (I'd rather the language had direct support for multiple dispatch, really), but it's certainly not a dumb way of doing things as you seem to think. In fact it's rather clever, and in some cases the best option.
> If you read the "Details" section […], you'll see why you can't just inline the accept method.
Oops.
Still, it makes me feel very uneasy: the accept methods dispatch over behaviours, and the visit methods dispatch over the object. This means highly non-local code.
Collision detection looks like a good example. Okay. Now even with multiple dispatch, I can't suffer the idea of having up to n² methods, for n different classes to dispatch over. In the case of collision detection, I would try very hard to have no more than 3 or 4 types of collision volumes. Just one if I can help it.
Personally, I'm not sure I like multiple dispatch at all. When you need it, you expose yourself to combinatorial explosion. It's only polynomial, but it still scales poorly.
For something like collision detection, you're always going to have n² cases to handle, even with something like pattern matching. The question then becomes whether you have n² methods or n² branches. There's a "combinatorial explosion" regardless; you just can't avoid it for some things. Then it's just a question of how you structure the code.
Versus the pattern-matching approach (abusing CL's case expression for clarity):
(defun collide? (a b)
  (case (list (type-of a) (type-of b))
    ((bus person) ...)
    ((person bus) ...)
    ((bus bus) ...)
    ((person person) ...)))
Beside potential performance implications of implementing one way or the other, you also have an open-closed conundrum. Multimethods leave the set of behaviors open, while the pattern-matching leaves it closed. That is, if a library wanted to define a new kind of collidable entity, it's trivial to add with a new library method. Not so easy with the pattern-matching approach. For some applications, this doesn't matter; for some it does.
You can avoid the n² problem if every object uses the same type of collision mesh (or bounding box, or bitmap, or whatever). Now the problem is to get from the object to the collision mesh, and that's O(n).
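A minimal Haskell sketch of what I mean, assuming a made-up axis-aligned bounding box type:

data AABB = AABB { minX, minY, maxX, maxY :: Double }

-- One instance per object type gets you to the shared representation: O(n).
class HasBounds a where
  bounds :: a -> AABB

-- A single pairwise test instead of n * n of them.
overlaps :: AABB -> AABB -> Bool
overlaps a b = maxX a >= minX b && maxX b >= minX a
            && maxY a >= minY b && maxY b >= minY a

collide :: (HasBounds a, HasBounds b) => a -> b -> Bool
collide x y = overlaps (bounds x) (bounds y)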
The bigger question is, is it practical at all for everyone to use the same type of collision object?
The answer to the bigger question is: not always. But let's assume that in our case, the answer is yes, and we can use a bounding box for everything. Let's also assume that when a collision is detected between two objects, the results of that collision vary depending on what those objects are. How do we handle that? Well, we're back at the n² problem again.
I once wrote a version of Asteroids in C++ in which every object had a particular bounding shape that was used for the detection. For the logic, I still had n² cases to determine the result of that collision. Looking back, I contend that it would have been cleaner to use the visitor pattern, since all the logic in my game was contained in various game objects except for the collision handling logic. [1]
It's good to be skeptical of design patterns. I've worked at several companies where liberal application of patterns really mucked up the code base. But in some cases, they really can be the best solution -- when used conservatively and carefully.
[1]: Looking back I also would have used an entity system rather than "classic" OOP, but that would not have solved the n² case dilemma.
It's just that you made a comment about design patterns being tailored for Java, then apparently later actually investigated the visitors example. You should do those things in the opposite order.
So why should I waste my time with yet another OO book? Plus, I know it by reputation, which tells me closures greatly simplify or even eliminate many of the patterns in this book. Languages with a GC and no closures should not exist, and neither should books that teach how to work around their absence.
But I have just learned some interesting things about this book right now. Like, it was written in Smalltalk primarily, and merely translated to C++. Before Java came out, no less. So maybe its use isn't limited to class-based, single-dispatch, statically typed languages that have neither closures nor generics (like the first versions of Java).
But if the book is limited to such braindead languages, then patterns are just a symptom of bad language design, and I'd like to demonstrate that.
But first, I'll read the book. At least, I will know my enemy. I'll probably learn a thing or two along the way. And who knows, maybe I will finally "get OO"?
for every single subtype you add, you still need to add a case to each of your visitors.
Why is this a problem? If you have N visitors, that's N different behaviors for every type, and you'll have to define those N behaviors for a new type somewhere. In this case, you do it on each Visitor (since each Visitor defines a behavior). Without using the Visitor pattern, you might put the behaviors somewhere else, but you would put them somewhere.
With M objects and N behaviours, it's okay to write M plus N pieces of code. With the Visitor pattern as I saw described, I have to write M times N pieces of code. It doesn't scale.
The obvious way out of this is intermediate representation. Typically, you would have the objects present themselves in the same way to any incoming behaviour. Now the behaviours don't have to know about each and every object. They just need to know about the intermediate representation. And that pattern is trivially solved with first class functions and parametric polymorphism.
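A hedged Haskell sketch of the intermediate-representation idea, with invented types (Doc stands in for whatever shared shape the objects present):

-- The shared intermediate representation.
data Doc = Doc { title :: String, body :: String }

-- M objects, each with a single conversion to Doc (two shown for brevity).
newtype Invoice = Invoice Int
newtype Memo    = Memo String

invoiceDoc :: Invoice -> Doc
invoiceDoc (Invoice n) = Doc ("Invoice #" ++ show n) "..."

memoDoc :: Memo -> Doc
memoDoc (Memo txt) = Doc "Memo" txt

-- N behaviours, each written once against Doc; none of them mention Invoice or Memo.
renderText :: Doc -> String
renderText d = title d ++ "\n\n" ++ body d

wordCount :: Doc -> Int
wordCount d = length (words (body d))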
With M objects and N behaviours, it's okay to write M plus N pieces of code. With the Visitor pattern as I saw described, I have to write M times N pieces of code.
It's actually the same amount of code in either case. With N different behaviors and M classes or types, you'll either implement the N behaviors as N methods on a class (and you do this for M classes - still M * N) or you'll have N visitors and add one method to each visitor to define the behaviors for a type (and you'll do this for M types as well - still M * N).
Consider the 2D-CAD example in the Wikipedia article: there are some M shapes and N different file formats to implement. If you add a new shape, you need to define how it is represented in all N file formats. You can't get away with anything less than O(M * N) code. All the Visitor pattern does is change where you put it.
Typically, you would have the objects present themselves in the same way to any incoming behaviour. Now the behaviours don't have to know about each and every object.
If this is possible (which is very problem-dependent), then it's just as easily solved with inheritance. Since each subtype would have the same external interface (presenting themselves identically to incoming behaviors), the base class defines the external interface and subclasses override base class methods to define the implementation.
I was just saying M+N ≠ M×N, and that solutions that forces M×N are bogus.
Agreed. I was clarifying that converting your code to use visitors does not force an order of magnitude increase in the amount of code. If you start out with O(M+N) without visitors, you'll still have O(M+N) code after converting to visitors. Although, visitors are generally better appreciated when managing O(M*N) code than when managing O(M+N) code.
Lose the sarcasm, please. If you still believe I'm wrong despite my documented arguments, I suggest you put forth documented counter-arguments. It will have two benefits:
1: You may convince me. I'm serious. I do not anticipate any strong counter arguments, but if they come, I will listen. (I did listen to your first argument. It troubled me, really, I was certain the book was written in Java. So I checked.)
2: You may show the other readers the error of my ways. Since my arguments are documented and relatively articulate, they will likely sway anyone who isn't committed to a particular position. You wouldn't want me to put false ideas in their heads.
> Those specs suck. Whoever asked you this don't understand their own needs, or are struggling with a legacy behemoth that should eventually be replaced.
That may or may not be true, but presumably you're not suggesting that Haskell is unsuitable for writing software for users whose specs suck, or who don't understand their needs?
No language is suitable for writing badly specced software. And whatever language you are using, you'll get into trouble if non-technical people start making too many requirements about the way you organize your code (but don't despair, the first such requirement will be to not use Haskell).
"Badly specced" can be a number of things. Some domains are naturally well specified, others are not. The business rules of a company, a country's tax code, or a building code is usually not an elegant business domain. It's usually a horrible mess of incremental decisions accumulated over decades. Often it is even logically incomplete or inconsistent.
Of course no implementation can be more elegant than the underlying business logic, the question is: how does the language help me write an implementation that is as nice as possible when the business domain is horrible?
I think Haskell code often looks beautiful but I attribute that not only to it being functional/pure/compact/elegant but also to the code usually being examples of "clean" domains (mathematics, etc.) being implemented in a way that is suited for demonstration, and the fact that it is usually written by excellent developers (who are the ones who tend to preach FP, for good reason).
What I'd like to see are some large-ish real world examples of FP architecture for horrible business problems, not nice ones.
The law is the prime example of a horribly messy domain. Thankfully, the law is also human-made, which means it can be changed (at least in principle). We have wasted enough human-millennia already.
This doesn't really excuse the fact that Haskell is a harsh mistress that demands elegance. Many production languages are much more pragmatic about the quality of the spec, the quality of the developer, time constraints, and so on.
This doesn't really excuse the fact that Haskell is a harsh mistress that demands elegance.
I don't know how true that remains for expert Haskell programmers, but I agree that it's the way most introductory to intermediate material comes across.
In academia, you are often looking for the most elegant tool in clean situations.
In industry, you are often looking for the least clumsy tool in messy situations.
Perhaps Haskell is a good choice in one context but a poor choice in the other, at least for those first learning it.
> The idea that this composability is unique to Haskell is false. Java's streams are an example of exactly the same thing in another, non-functional, language. The Gang of Four "decorator", "composite", and "chain of responsibility" patterns are further general examples.
Certainly the concept of composability isn't unique to the Haskell community, no one's implying that it is. In fact, part of the point is that even these specific abstractions aren't unique to Haskell, but available anywhere, as they've been borrowed from mathematics. Composability and consistency are things that the mathematics community does exceedingly well, and the Haskell community has chosen to learn from them, while most other language communities do these things poorly, as they've chosen to reinvent the wheel. The GoF Design Patterns, in particular, are a rather verbose reinventing-the-wheel of well-known concepts like partial application, but with inconsistent vocabulary and implemented with regards to the limitations of some specific language's semantics, and do a poor job isolating the minimal set of constraints necessary to get the intended emergent properties. Calling a person or community arrogant for trying to highlight the utility of learning these lessons isn't helpful to anyone.
Or you could just make a sequence of several stages itself be a stage, collapsing the preponderance of abstractions like this post talks about. Then, instead of having a mutable property holding a directory, you might just use a function that takes a directory and returns a stage. You could have another function that takes a file and returns a stage. Guess what...this is starting to sound an awful lot like a monad.
Departing a little from the OP's topic, I think it's interesting that your outlined design relies significantly on mutable state ("a property of type directory"). I think one of the biggest advantages of Haskell is its purity, which discourages you from thinking about things in terms of mutable state. This has far ranging positive effects. It makes APIs much easier to understand and use. That in turn results in Haskell programs having MUCH more code reuse than other languages.
> I think it's interesting that your outlined design relies significantly on mutable state ("a property of type directory").
The design does not rely on mutable state. The property doesn't have to be mutable. It doesn't even have to be a field - in an object-oriented language, it could be a method that collects the directories from the child steps. You appear to be dressing me in a strawman's outfit.
> Then, instead of having a mutable property holding a directory, you might just use a function that takes a directory and returns a stage. You could have another function that takes a file and returns a stage.
What? Could you expand on this? The thing I'd like to do would be to take a chain and make a list of the files in its various directories. How would I use your functions to do this?
So a stage is some transformation of the file plus a working directory:
data Stage = Stage (File -> File) Directory
If we want to construct chains of Stages where we can still inspect any individual Stage, a monoidal composition won't cut it for the reason you stated. But this raises the question: what is the monoidal composition of Stages?
Well, first of all, we must do something about the Directory field, because that second field must be a monoid too (the transformation function already is). But let's assume that it is, and see what we get.
mappend (Stage f dir1) (Stage g dir2) = Stage (g . f) (mappend dir1 dir2)
mempty = Stage id (mempty :: Directory)
So if we can create a new directory from two directories, we can compose two Stages into a new one that uses the newly created working directory. Or, if the second field of Stage is a list of directories, the new Stage uses two working directories. That isn't the idea of chains that you had in mind, but it still sounds useful, at least to me.
It seems that each stage will need to know about at least two directories, an input directory and an output directory.
If it was implemented with a chain type and a stage type, the "run it" command would presumably have to read the directory for the second stage, then pass that into the first stage as an argument when processing the chain.
Alternatively, each stage could have a directory property and a next-stage property. Then when the stage runs, it could ask its next stage for the output directory. There wouldn't be a chain type as such, only links in the chain of stages. The "status report" query would return a list of statuses by prepending its status to the "status report" of the next stage. Joining two chains would work by joining the last stage of one with the first stage of the other. To run the chain, there would only need to be a reference to the first stage.
The linked list of stages might have too much flexibility; for example, there would be nothing to stop you having a particular type of stage which branches out to multiple other stages, and there may need to be safeguards against loops in the chain, say, by making the next-stage property immutable - though that would force the chain to be built up in reverse, which may be inconvenient.
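A rough Haskell sketch of that linked design, with File and Directory as placeholder types:

type File      = FilePath   -- placeholders for whatever these really are
type Directory = FilePath

data Stage = Stage
  { stageDir  :: Directory     -- the directory this stage owns
  , runStage  :: File -> File  -- the work this stage performs
  , nextStage :: Maybe Stage   -- Nothing marks the end of the chain
  }

-- The same prepending shape as the status report: this stage's directory,
-- then the rest of the chain's.
chainDirs :: Stage -> [Directory]
chainDirs s = stageDir s : maybe [] chainDirs (nextStage s)

-- Joining two chains: walk to the end of the first and link the second on.
joinChains :: Stage -> Stage -> Stage
joinChains s t = case nextStage s of
  Nothing -> s { nextStage = Just t }
  Just s' -> s { nextStage = Just (joinChains s' t) }

(The record updates above copy rather than mutate, so this is a sketch of the shape of the design rather than of the mutability trade-offs.)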
Come to think of it, Maven maps plugin goals to execution phases in a "monoidal" way. Each plugin has a number of goals, and each goal can be mapped to one or more phases: https://maven.apache.org/guides/introduction/introduction-to... In a way, each plugin is a "mini-lifecycle" in itself.
When more than one goal is mapped to a phase, the order of execution corresponds to the order of the plugins in the POM (which means that "plugin aggregation" is associative).
It seems, as DougBTX points out, that in your example each stage has one input directory and one output directory. This is true, also, of the chain itself (the chain's input directory is also the input directory of the first stage, while the chain's output directory is the output directory of the last stage.)
So, it doesn't seem like there really needs to be a difference in structure at all.
I think the general principle here is combinator-based approaches to program structure. Combinators are just as easy to use in object-oriented languages as they are in functional languages, especially in languages that allow some form of operator overloading. The following is all valid Ruby code:
f > g
f | g
(f | g) >> lambda { ... }
(f > g) >> lambda { ... }
Taking these ideas a little bit further, you end up with mini DSLs purpose-built for expressing things very concisely. In fact, if you squint a little, you could imagine the above code expressing some form of BNF grammar as Ruby code, and there are several parser combinator libraries out there that do exactly that.
I find it a little annoying that the general principles get lost behind smoke and mirrors like monoids and monads when things are in fact much more accessible and do not require anything other than some basic understanding of abstract algebra. It's the algebraic and not the fancy static types approach that has paid the most dividends when it comes to how I structure my code for readability and maintainability.
The important part of what I guess you could call algebraically structured combinators is that they must follow certain laws. A type system can certainly help enforce at least a subset of those laws, but you're right that you don't have to have one in order to make use of the structures.
Where you lose me is in saying both that a basic understanding of abstract algebra is helpful and that monoids are part of the "smoke and mirrors"; I'd consider monoids a part of a basic understanding of abstract algebra.
Poor choice of words on my part. Yes, monoids are indeed a very basic kind of algebraic structure, but the tutorials for these structures are always in a language like Haskell, where the actual simplicity of the concepts gets lost behind the type system. Type systems, in my mind, are logical structures, and even though there is a very close relationship between algebra and logic, one doesn't always have to tie the algebraic structures to some kind of type system to make sense of them.
Nods. The nice part about the type system is that it can automatically reject (some) misuses of the structures. I've found that helpful while learning some of these things as it can let me know that I don't actually understand something that I thought I had. I agree that emphasizing the typed aspect can do an injustice to the other algebraic properties.
> This is one reason why you should learn Haskell: you learn to how to build flat architectures.
I suspect that most people learn the advantages of flat architectures with experience using any programming language. For me, I started to "get it" with Javascript & Ruby. So Haskell hardly has a monopoly on this.
I understand that Haskell has a different take on this. It seems the author has found a pattern in Haskell that he frequently uses.
Ideally, I can use patterns in multiple languages, so my experience can seamlessly transfer.
This often means there's a lowest common denominator to a particular pattern. I'm afraid that if I learn a pattern unique to Haskell, it will not be applicable to any other language. I suspect that there are underlying, common principles that are expressed differently in Haskell.
These are not patterns unique to Haskell. In fact, it's just the opposite: these are mathematical abstractions, and so they can be used anywhere that you have use of function application, in stark contrast to the alternatives. By using these patterns instead of Ruby- or Javascript-specific, application-specific, or object-oriented-specific patterns, you can be more confident that your abstractions will be consistent with each other, and as a result, more composable.
Case in point: I'm currently developing in Javascript, but I use these patterns (Monad, Applicative Functor, etc) to manage things like asynchronous code and error handling, because they're far more flexible/composable and reliable (in terms of not having side effects) than the other offerings on hand, and I know that will always be the case, because these abstractions were designed with the constraint of composability in mind: in fact, they are, by definition, the minimum set of constraints required to get these emergent properties. The benefit that Haskell itself offers is that it can check at compile time that these constraints hold true.
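For what it's worth, here is the error-handling half of that idea written out in Haskell notation, as a sketch (parseAge and checkAdult are invented): each step can fail, and Either's Monad instance threads the failure through automatically.

parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] -> Right n
  _         -> Left ("not a number: " ++ s)

checkAdult :: Int -> Either String Int
checkAdult n
  | n >= 18   = Right n
  | otherwise = Left "too young"

validate :: String -> Either String Int
validate s = parseAge s >>= checkAdult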
> I'm currently developing in Javascript, but I use these patterns (Monad, Applicative Functor, etc) to manage things like asynchronous code and error handling
Thank you for this comment. I'm interested in knowing which fundamental patterns Haskell emphasizes.
Unfortunately, time is limited & currently, I'm not able to allocate enough to learn & use Haskell in a meaningful project. I recognize that there are some important abstractions & I'd love to learn all of the useful abstractions that are applicable to other languages & environments.
I think his argument is that these patterns are the easy thing to do in Haskell, not that they are necessarily unique.
The Monoid pattern is universal, but most languages don't give you tools to talk about it as an entity.
It is harder to enforce something like implementing a Monoid in Ruby or JavaScript, but you can always implement mappend and mempty and use "duck typing". Haskell's advantage in encoding these patterns is static checking and type classes.
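For contrast, a small sketch of the Haskell encoding (Log is made up): the pattern gets a name the compiler can check against.

newtype Log = Log [String]

instance Semigroup Log where
  Log a <> Log b = Log (a ++ b)   -- the "mappend" half

instance Monoid Log where
  mempty = Log []

-- Anything written against Monoid now works for Log, checked statically.
combineAll :: Monoid m => [m] -> m
combineAll = mconcat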
> Ideally, I can use patterns in multiple languages, so my experience can seamlessly transfer.
In many cases the type system gives you no guarantee that one individual's implementation of the Observer pattern (for example) is free of bugs, free of side effects, or closely adheres to the pattern's spec.
I believe Gabriel is hinting at the theoretical underpinnings of what it means to be composable, as opposed to just talking about how things can be mashed together (via pattern or interfaces or contracts or testing or whatever).
> In many cases the type system gives you no guarantee that one individual's implementation of the Observer pattern (for example) is free of bugs, free of side effects, or closely adheres to the pattern's spec.
Nor are you in Haskell, of course. Nothing about that language prevents code being full of bugs, nor failing to adhere to a specification, nor, given the existence of unsafePerformIO, having side effects.
Certainly, a competent Haskell programmer will not write bugs, will adhere to the specification, and will not introduce hidden side-effects. But then, nor will a competent programmer in any other language.
What is the barrier to "stupidity" in a language? In Haskell there is a deliberate, powerful barrier in the form of a compiler which will yell at you repeatedly until you stop violating the constraints encoded in the types. You either learn to work along the lines laid out by the types, or you cheat.
To cheat you import a module called System.IO.Unsafe or Unsafe.Coerce and get sudden access to a magic hammer. If you use it to subvert the type system in a dramatic way then you'll immediately get so many massive runtime errors that you'll be forced to rescind. If you use it on a project that enforces -XSafe then the compiler will still reject you.
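A tiny illustration of that hammer, not something to imitate:

import System.IO.Unsafe (unsafePerformIO)

-- A "pure" value that secretly performs IO when it is evaluated.
cheat :: Int
cheat = unsafePerformIO (putStrLn "side effect!" >> return 42)

-- Under {-# LANGUAGE Safe #-} the import above is rejected by the compiler.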
To get past both of those barriers you probably have to have a really, really great desire to break the system or a deep understanding of why and when it's valuable to use these unsafe types.
Again, nothing is actually stopping a terrifically bad programmer from breaking your statics, but there are a lot of things making their life much harder in the process.
The wager is that you can detect and avoid people who are willing to cross higher barriers much more easily.
A particular side effect that an incompetent Haskeller can (and will, multiple times) introduce is writing code that looks correct and gives correct results, but suddenly runs out of memory on larger (but still reasonably sized) data due to unexpected laziness.
In practice this doesn't happen very often. Laziness is unique to Haskell, but the phenomenon that bad programmers can introduce subtle bugs is most definitely not.
You can encode a specification within Haskell's type system and get guarantees that are very difficult in other languages. Gabriel's pipes library is a great example of this (see a Pipe's associativity laws).
Oh, sure. Haskell's type system is famously powerful. But the simple fact is that it does not preclude bugs. It is possible to have bugs in Haskell programs!
Nobody who knows what they're talking about says that Haskell's types (or even Idris/Agda/Coq's types) "preclude bugs", but rather that they "eliminate some classes of bugs so long as you don't behave really pathologically".
That's a much weaker statement, but still a valuable property.
I did not say it was impossible to write Haskell with bugs. I simply stated that you are not guaranteed that a particular implementation of a design pattern adheres to the pattern's semantics, thereby introducing a bug. With good usage of Haskell's type system, you can wholly avoid such bugs, and a great example of this, as I stated, is the associativity laws within the pipes library.
No language "precludes bugs". You can get bugs because you didn't leverage Haskell's type system ("let's use String for everything"), logic bugs, or simply bugs caused by a faulty understanding of the problem. What you're not going to get is NPEs, or unwanted side-effects. Since it takes very little work in Haskell to ensure statically that two numbers which represent two different notions (say, mass and velocity) have different types, it's going to be lot easier to get a robust program.
But no, Haskell is not a silver bullet, and comes with a number of significant drawbacks.
Obviously it's a difference of degree. But Haskell offers more assurances in some areas than other languages. There are escape hatches, but that's exactly what they are - things that any reasonable programmer shouldn't reach for unless they really have to. You can give a function the value 'undefined' and the compiler won't care. But that is really only meant as "not implemented yet", "there is no sensible value to give back for this input", etc., not par for the course like, for example, the way null pointers are used in some other languages.
But since the world isn't perfect and programs can always have bugs, I guess we should just throw our hands up and proclaim that it doesn't really matter what we use.
Examples:
Scrap your type classes: http://www.haskellforall.com/2012/05/scrap-your-type-classes...
Hello Core: http://www.haskellforall.com/2012/10/hello-core.html
Coding a simple concurrent scheduler yourself: http://www.haskellforall.com/2013/06/from-zero-to-cooperativ...
Tutorial as part of library doc (!): http://hackage.haskell.org/package/pipes-4.1.1/docs/Pipes-Tu...