Hi HN! Author here. I made this game because I wanted to see what it was like to make video games with Haskell. It turned out to be a pretty good fit. If you are interested, here's an explanation of the code: http://vimeo.com/109663514
I'm learning Haskell and I find this project very helpful and educational.
There's one thing I'm wondering about, and that's the lack of (unit) tests. I'm coming from the Java world, where I'm used to writing unit and integration tests for most of the code I write. But I have noticed that this is not so common in Haskell projects.
Why is that? Is it a cultural thing, or are Haskell's strict type system and immutability so helpful that unit tests are not as necessary as in, for instance, Java?
HUnit will be very familiar to you; QuickCheck is definitely more interesting. You specify some invariants, and the heuristic algorithm tries to come up with random inputs that break your function. It's potentially less accurate than good unit tests, but it's also a ton less work.
I can just say something like "This function takes a list of integers as input, and I expect an output list of the same size, where all numbers are even, and there are no duplicates" (in math-y form, of course), and QuickCheck will try however many random inputs I specify to try to find a case that breaks that invariant.
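In Haskell, that kind of property is just a plain function to Bool. Here's a sketch with base only (`evenize` and its spec are invented for illustration; with the real library you'd `import Test.QuickCheck` and run `quickCheck prop_evenize` to get the random-input generation):

```haskell
import Data.List (nub)

-- Hypothetical function under test: output has the same length as the
-- input, contains only even numbers, and has no duplicates.
evenize :: [Int] -> [Int]
evenize xs = [2 * i | (i, _) <- zip [1 ..] xs]

-- The QuickCheck-style property: input in, Bool out.
prop_evenize :: [Int] -> Bool
prop_evenize xs =
  length ys == length xs && all even ys && nub ys == ys
  where
    ys = evenize xs
```

QuickCheck would then try to falsify `prop_evenize` on many randomly generated lists and shrink any counterexample it finds.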
That being said, there are those in the Haskell community who take Haskell's referential transparency and equational reasoning to a whole new level and don't write tests, but instead write proofs.
In addition to the libraries mentioned above, there are also some very good testing frameworks (which can run test suites composed of HUnit and QuickCheck tests), such as Hspec and tasty (and various extensions, such as tasty-golden for golden tests and tasty-quickcheck for QuickCheck tests). I've used these in various projects of mine; as with other languages, sometimes TDD is the right approach, sometimes you write a few tests afterwards to make sure everything works, and sometimes you can get away with no testing at all.
That said, I do feel that the type of testing I do in Haskell is often quite different from other languages. Due to the strict type system, some types of errors can be eliminated at compile time, and thus do not need to be tested for. Of course, as the parent pointed out, some people take this to a (very awesome) extreme, but even in a less rigorous form the static guarantees can eliminate many things that I would write unit tests for in other languages.
Finally, you may also be interested in SmallCheck, an alternative to QuickCheck. From its documentation:
> The big difference [between SmallCheck and QuickCheck] is that instead of using a sample of randomly generated values, SmallCheck tests properties for all the finitely many values up to some depth, progressively increasing the depth used. For data values, depth means depth of construction. For functional values, it is a measure combining the depth to which arguments may be evaluated and the depth of possible results.
This means that SmallCheck will often find small and interesting counterexamples to the properties you wish to affirm. (As an aside, note that writing property-based tests is a skill, just like writing good unit tests.)
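To make the contrast concrete, here is a toy version of SmallCheck's idea in plain Haskell (names invented; the real library generalizes the enumeration via type classes): test every small input exhaustively, so the first failure found is already a small counterexample.

```haskell
-- All lists of length <= n drawn from the given values, smallest first.
listsUpTo :: Int -> [a] -> [[a]]
listsUpTo 0 _  = [[]]
listsUpTo n vs = [] : [v : rest | v <- vs, rest <- listsUpTo (n - 1) vs]

-- Exhaustively check a property up to the given depth, returning the
-- first (and hence small) counterexample if one exists.
checkUpTo :: Int -> ([Int] -> Bool) -> Maybe [Int]
checkUpTo depth p =
  case filter (not . p) (listsUpTo depth [-2 .. 2]) of
    (cex : _) -> Just cex
    []        -> Nothing
```

For example, `checkUpTo 3 (\xs -> sum xs >= 0)` immediately turns up the single-element list `[-2]` rather than some large random list.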
tl;dr: Testing can sometimes be less necessary, but is certainly not unnecessary, and there are some very high quality and well-documented libraries for testing in Haskell.
I think it is true that unit tests are not so necessary compared to Java, but I think many in the Haskell community (myself included) have a tendency to rely too heavily on this. Tests aren't unnecessary in Haskell.
gamegoblin is absolutely correct that there is some great tooling for tests in Haskell.
My experience is that you can generally skate along on Haskell's type system and get further into a project without hitting the complexity threshold where you need tests to confidently make improvements.
Bigger and longer-lived projects benefit enormously from an automated test suite no matter what they're written in.
That said, when you refactor a program in Haskell, instead of running the tests and seeing some functionality fail, usually the compiler itself will tell you there's a problem at line X of file Y.
That use case for tests is just not there - and it's the main selling point of unit tests. Thus, I'd expect Haskell developers not to write as many tests of this kind. But again, you'll always need some.
I think unit tests are overrated. No, not because testing is overrated, but because unit tests aren't the only way you can test software. QuickCheck for Haskell, for example, is a property-based testing library that works a lot like fuzzing.
Honestly, is functional programming really better for this type of game? I find the structure and nature of the game more fitting to an object-oriented approach. IMO chips, blocks, and the player character all tend to be better described as mutable objects, because that's what they actually are from the user's perspective...
Haskell is a general-purpose programming language, and discrete simulation-type systems like Chip's Challenge can be very elegantly modelled with nice things like algebraic data types, pattern matching, etc.
The code is right there on GitHub, although if you're not familiar with Haskell (and lens), it might seem cryptic. But you'll see that it is actually mostly written in an imperative style. The logic is mostly pattern matching on events to generate record updates. This is a common style in Haskell and fits the domain well.
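As a rough illustration of that style (all names and types here are invented for the sketch, not the game's actual code, which uses lens for its updates):

```haskell
data Dir   = North | South | East | West
data Event = Move Dir | Collect | Tick

data World = World { playerPos :: (Int, Int), chipsLeft :: Int }
  deriving (Show, Eq)

-- The logic is a pile of equations: pattern match on the event,
-- produce a record update on the world.
stepWorld :: Event -> World -> World
stepWorld (Move d) w = w { playerPos = shift d (playerPos w) }
stepWorld Collect  w = w { chipsLeft = chipsLeft w - 1 }
stepWorld Tick     w = w

shift :: Dir -> (Int, Int) -> (Int, Int)
shift North (x, y) = (x, y - 1)
shift South (x, y) = (x, y + 1)
shift East  (x, y) = (x + 1, y)
shift West  (x, y) = (x - 1, y)
```

It reads much like the imperative "on this event, update these fields" code you'd write in an OO engine, except each step returns a new world value.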
There are other more elegantly value-oriented ways of writing simulations in Haskell; look up functional reactive programming.
In addition to the other fine replies: As a game gets bigger and more complicated, you'll often find that the direct mutable OO approach becomes an antipattern at scale.
Do you ever have something you want to do, but discover halfway through that for some reason it's not legal? Good luck undoing what was half done. Of course you might think the simple answer is "don't do that", but as the game gets bigger and responsibilities get spread out between all the objects, it becomes increasingly easy for some action to become possible that requires a lot of checking, and if things are just willy-nilly mutating themselves, the odds of you encountering this sort of problem skyrocket.
And that's just one example... games by their nature resist the sort of decomposition that most "real" software can have done on it, because as soon as you tell a designer that X can't talk to Y, by golly they'll start thinking about how awesome it would be if X were talking to Y. You don't get a pretty tree of relationships with occasional crosstalk; you get a nasty snarl of communication patterns, irreducibly.
Many games end up functional under the hood anyhow, with a clearly-defined loop:
1. Ask all the objects in the world what they "want" to do.
2. Validate and resolve conflicts.
3. Render the new world state.
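That three-step loop might look like this in Haskell (hypothetical types and toy rules, with printing standing in for real rendering):

```haskell
data Intent = Stay | MoveLeft | MoveRight deriving (Show, Eq)

type World = [(String, Int)]  -- each object's name and x position

-- 1. Ask every object what it "wants" to do (here: everyone drifts right).
intents :: World -> [Intent]
intents = map (const MoveRight)

-- 2. Validate and resolve conflicts (here: keep positions inside 0..10).
resolve :: World -> [Intent] -> World
resolve w is = [(n, clamp (step i x)) | ((n, x), i) <- zip w is]
  where
    step MoveLeft  x = x - 1
    step MoveRight x = x + 1
    step Stay      x = x
    clamp = max 0 . min 10

-- 3. Render the new world state.
render :: World -> IO ()
render = mapM_ print
```

The point is that steps 1 and 2 are pure functions from the old world to the new one; only step 3 touches IO.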
One of OO's big problems in general is lack of control over the profound, often-overpowerful effects of mutation, and used in an undisciplined manner, games often turn into a worst-case-scenario for OO. And most of using them in a "disciplined" manner turns out to be using "FP" techniques to keep the complexity under control.
Of course nothing stops you from doing that in OO, with discipline and care unenforced by the compiler. But what it does mean is that suddenly the "mismatch" that you're referring to disappears, and one starts wondering whether perhaps FP is the better match for games and OO the accident of history.
I might also point out that OO and FP are not strictly at odds. You can use object libraries in FP languages to encapsulate the state and logic but avoid the mutation of the state.
So if the two issues are orthogonal then you're both right. :-)
At the risk of going really philosophical, in the end words are just words, and your program is what it is. Given the vast, vast, vast space of possible programs it should come as no surprise that little tiny English words are hardly up to the task of completely rigidly classifying them. Especially programs, which are quite literally the most perverse mathematical constructs known to man, courtesy of the Church-Turing thesis.
You just have to change the way you think about the game. Rather than viewing the game as a holistic "Game" object which changes state as the game progresses, view it instead as a series of game states, or frames, which are immutable.
Then you define a function that is like (in C-ish):
GameState tick(GameState current, Action playerMove) {
    // ... set up the next frame and return it ...
}
And a function to play like:
void play() {
    play(initialState());
}

void play(GameState current) {
    if (current.gameOver()) return;
    Action playerMove = getPlayerMove();
    GameState next = tick(current, playerMove);
    play(next); // In Haskell (and in C at higher optimization levels), this tail call is compiled to a loop
}
Of course it can get a bit more complicated depending on the context, but you get the idea.
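For comparison, roughly the same skeleton in Haskell (the state and action types are invented for the sketch). Feeding the loop a list of actions instead of live input is also what makes replays cheap:

```haskell
data Action = MoveUp | MoveDown | Quit deriving (Show, Eq)

data GameState = GameState { position :: Int, gameOver :: Bool }
  deriving (Show, Eq)

initialState :: GameState
initialState = GameState 0 False

-- Pure transition: old frame plus player input gives the next frame.
tick :: GameState -> Action -> GameState
tick s MoveUp   = s { position = position s + 1 }
tick s MoveDown = s { position = position s - 1 }
tick s Quit     = s { gameOver = True }

-- The play loop as tail recursion over a stream of actions.
play :: GameState -> [Action] -> GameState
play s _ | gameOver s = s
play s []             = s
play s (a : as)       = play (tick s a) as
```

In a real game the action stream would come from IO, but the transition logic stays pure either way.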
So I'd say functional programming is neither better nor worse. A lot of people get hung up on the idea of immutability, but really I've found it to be a non-issue.
> Chips, blocks, player character all tend to be better described as mutable objects
That is just one way to look at it. You could also say that they can all be described by the universe's state at a point in time. Then, you'd have a function that takes the old state and some actions to produce a new state, by recursively doing the same to each object within the universe.
This wording sounds remarkably similar to quantum mechanics. I wonder how far you could take the idea of building a game out of state vectors and linear transformations.
State vectors and linear transformations are by no means unique to quantum mechanics. A lot of dynamic systems can be described this way: robot arms, chemical plants, airplanes, even network dynamics.
Naively I can guess how robot arms and networks could be spoken of in terms of states (and perhaps even chemical plants), but airplanes? Do you have any more information on the topic?
The technique is called a state space model [0]. A complete airplane might have been a bit hyperbolic, but as long as the assumption of linear dynamics is allowable, extremely large systems with multiple inputs and outputs and thousands of states can be simulated. What's more, there are some elegant and practical ways to find controllers with mathematically proven performance (like LQG control, or Model Predictive Control). This field of study has led to the Kalman filter, which might ring a bell.
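A minimal discrete-time version of such a model, x[k+1] = A·x[k] + B·u[k], is easy to sketch (toy point-mass example with naive list-based linear algebra, nothing like an airplane):

```haskell
type Vec = [Double]
type Mat = [[Double]]

-- Matrix-vector product and vector addition.
mv :: Mat -> Vec -> Vec
mv m v = [sum (zipWith (*) row v) | row <- m]

vadd :: Vec -> Vec -> Vec
vadd = zipWith (+)

-- One step of x[k+1] = A x[k] + B u[k].
step :: Mat -> Mat -> Vec -> Vec -> Vec
step a b x u = mv a x `vadd` mv b u

-- Toy system: a point mass with state [position, velocity],
-- acceleration as the input, and a unit time step.
aM, bM :: Mat
aM = [[1, 1], [0, 1]]
bM = [[0.5], [1]]
```

Simulating the system is then just folding `step aM bM` over a list of inputs; bigger systems only change the sizes of A and B, not the code.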
> This wording sounds remarkably similar to quantum mechanics. I wonder how far you could take the idea of building a game out of state vectors and linear transformations.
That's an interesting idea (and might fit in with "math / science gamification" software). Terry Tao used the opposite point of view, describing quantum mechanics in terms of video games: http://terrytao.wordpress.com/2007/02/26/quantum-mechanics-a....
Game author here. Honestly, I had the same concern when I started out. But Haskell turned out to be a surprisingly good fit. I made a short video explaining the code: http://vimeo.com/109663514
It skips a little in the beginning, but I think you might find it interesting.
There are some advantages to thinking this way. You get replay functionality "for free" (even if you never expose this to users, it makes debugging much easier), and it becomes much easier to think about how the game will work in a network situation (where each client has its own "copy" of the world, and the two copies might be "out of sync"). I've written initially-imperative games that became more and more functional as I started to debug network problems.
I mean being able to store a full game in concise form (by just storing user input) and then "replay" it - show what happened in the game, step forwards and backwards in time.
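Concretely, with a pure tick function this is just a fold over the input log (a sketch with invented types; a real world state would be much richer than an Int):

```haskell
data Input = L | R deriving (Show, Eq)

type State = Int  -- stands in for a full world state

tick :: State -> Input -> State
tick s L = s - 1
tick s R = s + 1

-- The entire game is determined by the initial state plus the input log.
replay :: State -> [Input] -> State
replay = foldl tick

-- Stepping to frame n, forwards or backwards, is replaying a prefix.
frameAt :: State -> [Input] -> Int -> State
frameAt s0 inputs n = replay s0 (take n inputs)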
You really don't get that for free though: since you'd be saving versions of everything, you'd quickly exhaust the heap. Also, one could bother with delta checkpointing (only copying dirty pages) to get something like this in an imperative language, or build a programming model especially suited for it [1]. Functional programming doesn't really make this much easier (even if you use persistent data structures).
I saw the demo yesterday; it looked like you still had state in the form of a world value (at least storing Mario's physical state information). Those world values would have to be kept around.
Elm code is only pure in that it doesn't have implicit side effects, it definitely has state.
However, if you have deterministic execution, you can always just re-execute from the beginning to get the same execution. This works for functional and non-functional code.
Yeah, on the topic of [1], that's basically FRP in a different wrapper and using different terminology. Fundamentally there's little difference between the two approaches, you're just hiding some of the details that FRP normally exposes.
> You really don't get that for free though, since you saved versions of everything you'd quickly exhaust the heap.
Why would you save versions of everything? That sounds like a pessimistic strategy that you would need in a stateful environment where anything can be destructively updated at any time, which doesn't apply in this case.
> , or build a programming model especially suited for it [1].
Sorry if I don't take the claim "especially suited for it" at face value, seeing as you are the co-author and you are criticizing something which seems to try to achieve the same thing in a different manner. Maybe that's unfair of me.
FYI I'm in no way challenging the impressive nature of making this game with Haskell. I'm just challenging the hype around the style and asking whether it really is the most efficient solution to programming a game...
It isn't really about efficiency; it's about making things simpler to reason about. Functional style is a different way of reasoning about state than OO, but they're both fundamentally about abstracting state and making it simpler to work with and reason about.
OO takes the divide-and-conquer approach, dividing state into progressively smaller chunks until each piece is small enough to reason about. The flaw with that is that at some level you still need to understand and reason about the entirety of the state, and if you have a bug in some lower-level piece, it often exhibits as bad behavior in one of the higher-level pieces. Figuring out which of the lower-level pieces is ultimately responsible is 90%+ of the time spent on debugging in an OO program. Another challenge is that because state is mutable, just finding the starting point to debug the problem is often a chore, because you must first get the system into the state necessary to trigger the bug. Even worse, the state you initially detect the bug in is often significantly further along than where the bug originates, so you end up having to progressively walk backwards through time to arrive at the originating point (which in practice means running a debug session to a point, finding the next piece of state that looks wrong, setting new breakpoints, and then repeating the whole exercise).
Functional programming takes a different tack by instead strongly discouraging state altogether. A well-written functional program will contain only the absolute bare minimum state necessary to accomplish the task at hand. Further, state tends to be divided into two distinct camps: the global state that's referenced by name (largely non-existent, barring mutable IO refs and such), and the local state that's passed explicitly as function arguments (ignoring for the moment the monad abstractions around that). As with OO, you sometimes arrive at a point where behavior isn't what you expect and need a debugger to figure out where the state has deviated from the expected. Unlike in OO, however, you have the call stack to reference and can see exactly where in the history the problem originated. Finding the source of a bug in a large functional program tends to be rather trivial compared to the same task in a large OO program, due to the more concise and limited state that exists in functional programs.
I think from a perspective of learning functional approaches it's useful to have things like this, as a more complex worked example of applying functional concepts to a problem. That is to say it seems like a broader version of "You know how in OO you made a 'Car' object with an 'Engine' with attributes and functions, well this is how it's done functionally".
Chips was one of the first applications I ever used as a kid. It was Windows 3.1, and I distinctly remember knowing how to start `windows` from the DOS prompt.
You might also be interested in this framework if you want to make games of your own: https://github.com/egonSchiele/actionkid