Hacker News

Could an OCaml expert give a quick take on the view that if FP, why not go all the way and do Haskell instead? I mean, if "correct, efficient, beautiful" are attributes of OCaml (and I know opinions differ, but let's assume for a moment..) then shouldn't they be attributes of Haskell too, maybe even more so in some ways?


FP is great but not necessarily at all costs.

OCaml is eager (strict) by default instead of lazy, and it allows imperative code with side effects. Both are escape hatches from the pure FP world.

So, performance is easier to reason about and you can interact with your side-effecty real world stuff without having to reorganize your whole program around the correct monad.

Most of the time you want your loops to be higher-order functions, but once in a while you want to just build a vector from a for loop. OCaml lets you do it without it being a whole intervention.
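A tiny sketch of what that looks like (the names are mine, not from the thread): the same array built functionally with a higher-order function and imperatively with a for loop, both idiomatic OCaml.

```ocaml
(* Functional style: squares of 0..9 via a higher-order function. *)
let squares_hof = Array.init 10 (fun i -> i * i)

(* Imperative style: the same array built with a for loop and mutation. *)
let squares_loop =
  let a = Array.make 10 0 in
  for i = 0 to 9 do
    a.(i) <- i * i
  done;
  a
```

Neither version needs any ceremony; you pick whichever reads better at the call site.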


Also, a major differentiator is that Haskell deals with effects primarily by using monads, whereas many modern functional languages (e.g. Koka or Flix) are being designed from the ground up to use algebraic effects instead. OCaml is also embracing effects. Haskell has some effect libraries as well, but monadic code is everywhere. IMHO, as someone who loves Haskell, algebraic effects will make FP much more approachable.
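For the curious, a minimal sketch of what OCaml 5's effect handlers look like (requires OCaml >= 5.0; the `Ask` effect and the constant 21 are made-up illustrations, not a standard API):

```ocaml
(* A user-defined effect, performed by a computation and interpreted
   by a handler installed with Effect.Deep.try_with. No monad needed. *)
open Effect
open Effect.Deep

type _ Effect.t += Ask : int Effect.t

(* The computation just performs Ask twice; it says nothing about
   where the answers come from. *)
let comp () = perform Ask + perform Ask

let result =
  try_with comp ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k 21)
        | _ -> None }
(* The handler answers each Ask with 21, so result is 42. *)
```

The point is that effectful code stays direct-style; only the handler knows how `Ask` is serviced.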


It's really worth noting that one of the biggest real-world Haskell codebases anywhere (i.e. at Standard Chartered) uses its own compiler to make Haskell evaluation strict and to make interop with C++ easier.

Laziness by default was definitely an opinionated design choice, and it shows when you use the language in production.


When I worked for Standard Chartered I was told that Mu, the compiler in question, was only strict because it was originally written to target an existing strict runtime (called Lambda, and Mu is the next letter in the Greek alphabet), not because they particularly wanted a strict language.


I really like OCaml, but the error handling is inconsistent: some libraries use Option/Result, some use exceptions, and the mix makes it a little hard to work with. I much prefer Rust in this regard.


doesn't the Rust ecosystem also inconsistently mix Option and Result?

anyway, it's a fairly trivial wrapper to handle the odd annoying thing that raises

Option.try_with (fun () -> thing_that_exns ())

Result.try_with (fun () -> thing_that_exns ())

(it would be nice if you could tell if something raises by the type signature somehow though)
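Note that `Option.try_with` and `Result.try_with` above come from Jane Street's Core/Base libraries; with the stdlib alone the wrapper is a one-liner. This `try_with` is a hand-rolled sketch, not a standard function:

```ocaml
(* Convert an exception-raising thunk into a Result. *)
let try_with f = try Ok (f ()) with e -> Error e

(* Usage: int_of_string raises Failure on bad input. *)
let ok = try_with (fun () -> int_of_string "42")
let err = try_with (fun () -> int_of_string "nope")
```

Here `ok` is `Ok 42` and `err` is `Error (Failure _)`.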


What about Haskell STM versus OCaml Multicore Eio?


Kcas is the Haskell STM analogue in OCaml https://github.com/ocaml-multicore/kcas/


I started with SML in the 1980s, implementing a core math algorithm (Gröbner bases) used in my K&R C computer algebra system Macaulay. Then I got the idea that there should be a related algorithm in a different problem domain (Hilbert bases), and I managed to convert my code in twenty minutes. It ran. This completely blew my mind, on par with switching from punched-card Fortran to an APL terminal in the 1970s.

Everyone talks a good game about more powerful, expressive programming languages until they need to put in the work. Ten years' effort to program like ten people for the rest of your life? I'm 69 now; I can see how such an investment pays off.

I moved to OCaml. Studying their library source code is the best intro ever to functional programming, but I felt your "all the way" pull and switched to Haskell. (Monads make explicit what a C program is doing anyway. Explicit means the compiler can better help. What it comes down to is making programming feel like thinking about algebra. This is only an advantage if one is receptive to the experience.)

I'm about to commit to Lean 4 as "going all the way". Early in AI pair programming, I tested a dozen languages including these with a challenging parallel test project, and concluded that AI couldn't handle Lean 4. It keeps trying to write proofs, despite Lean's excellence as a general purpose programming language, better than Haskell. That would be like asking for help with Ruby, and AI assuming you want to write a web server.

I now pay $200 a month for Anthropic Max access to Claude Code Opus 4 (regularly hitting limits) having committed to Swift (C meets Ruby, again not just for macOS apps, same category error) so I could have first class access to macOS graphics for my 3-manifold topology research. Alas, you can only build so high a building with stone, I need the abstraction leverage of best-in-category functional languages.

It turns out that Opus 4 can program in Lean 4, which I find more beautiful than any of the dozens of languages I've tried over a lifetime: it's like Scheme with parentheses removal done right, and with far more powerful abstractions.


Have you looked at Idris 2?

I'm 53, impressed that you're still going at it at 69!


Yes. I'm impressed with Idris 2. I love how it uses Chez Scheme, my favorite scheme implementation. I contributed for a bit to getting Idris installation working on Apple Silicon Macs based on Racket's port of Chez Scheme, only to learn that I was working with Idris instructions that hadn't been updated.

Lean 4 is a better supported effort, with traction among mathematicians because of the math formalization goal.

I have more reasons to want to learn Lean 4. Peel away their syntax, and Lean 4 proofs are the word problem for typed trees with recursion. I find the reliance of AI on artificial neurons as arbitrary as so many advanced life forms on Earth sharing the same paltry code base for eyes, a nose, a mouth, and GI tracts. Just as many physicists see life as inevitable, in a billion runs of our simulation I'm sure AI would arise based on many foundations. Our AI fakes recursion effectively using many layers, but staring at the elementary particles that make up Lean proofs one sees a reification of thought itself, with recursion a native feature. I have to believe this would make a stronger foundation for AI.

I don't get that same rush looking at Idris. Using Lean 4 for general purpose programming? It must be good training.


I'll have to have a look at Lean 4 then.

The simulation hypothesis has a flaw, IMHO: if it is modelable and therefore computable, it may be subject to the halting problem.


As someone who has lightly used Haskell and quite heavily used ocaml:

- Pragmatism: The OCaml community has a stronger focus on practical, real projects. This shows in the kinds of packages available in the ecosystem and the way those packages are presented. (A number of Haskell packages I've tried to use seem to be primarily intellectual pursuits, with little documentation on actually using them.)

- Simplicity: Haskell does have some amazing features, but it has so many different ways to do things and so many compiler flags that code can look vastly different from one codebase to another. It's kind of the C++ of FP. OCaml is definitely more cohesive in its approach.

- Tooling: last I used Haskell, the whole ecosystem was pretty rough, with two package managers and a lot of complexity in just setting up a project. OCaml is not at the level of Rust or other modern languages, but it is definitely a step above, I'd say.


Pros and cons as usual. I've worked with both languages professionally, but I personally find OCaml more practical, better at programming in the large, and easier to write maintainable code in. It's a simpler language overall (side effects, strict evaluation). I find it strikes a sweet spot between these attributes, whereas Haskell is more abstract.

That being said, Haskell is pretty nice as well but I'd pick OCaml for real world stuff.

One thing that bothered me with both these languages is that people not fluent with FP could write code that isn't idiomatic at all. It's probably a bit harder to do in Haskell.


At least theoretically they could be. However, OCaml is in large part driven by Jane Street, and has been for some time now, and Jane Street's entire business model is built around optimizing for ultra-high-throughput, ultra-low-latency software where mistakes could cost on the order of hundreds of billions of dollars.

So my guess would be less that Haskell is not these things (or couldn't be), and more that OCaml has had the external forces necessary to optimize for them above all else.


Is OCaml lazy? I'm not an expert, but if you want ultra-high throughput, you might not want lazy evaluation. If I understand correctly, in Haskell some non-obvious things can slow you down because of the laziness.



How does laziness cause a slowdown? It merely reorders work done, it doesn't necessarily create more work.


To implement laziness, you need to accumulate "intermediate computation objects" called thunks. These take space and time to allocate and store.

See: https://wiki.haskell.org/index.php?title=Thunk https://wiki.haskell.org/index.php?title=Performance/Strictn...
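OCaml itself is strict, but its `Lazy` module makes the thunk machinery explicit, which is a handy way to see the cost Haskell pays implicitly (a rough sketch; the names are mine):

```ocaml
(* `lazy expr` allocates a thunk; Lazy.force runs it once and
   memoizes the result. In Haskell essentially every expression
   gets this treatment by default, which is where the space and
   time overhead comes from. *)
let thunk = lazy (print_endline "evaluating"; 6 * 7)

let a = Lazy.force thunk  (* prints "evaluating", returns 42 *)
let b = Lazy.force thunk  (* already forced: returns the memoized 42 silently *)
```

Multiply that one heap-allocated thunk by every subexpression in a program and the overhead (and the difficulty of predicting it) becomes clear.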


No, OCaml is not lazy.


Personally, I don't like how Haskell treats things like I/O as side effects that have to be wrapped in monads. OCaml feels much more practical.

Plus, though both languages allow defining new infix operators, OCaml coders are much more restrained about it than Haskellers, and I hate having to figure out what >>>>$=<<< does when it shows up in code.
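To be fair, defining an operator is just as easy in OCaml; the community convention is simply to stick to well-known ones. A sketch (this `>>=` for Option is a common idiom I'm defining here, not a stdlib binding):

```ocaml
(* A conventional, instantly recognizable operator: bind for Option. *)
let ( >>= ) x f = Option.bind x f

(* Halving succeeds only on even numbers. *)
let half n = if n mod 2 = 0 then Some (n / 2) else None

let v = Some 40 >>= half >>= half  (* Some 10 *)
let w = Some 40 >>= half >>= half >>= half  (* None: 10 is even, 5 is not... *)
```

The restraint is cultural, not technical: one or two familiar operators per codebase rather than an alphabet soup.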


*. sends its regards

:)


The person to look to for explanations of why OCaml/SML and not Haskell is Bob Harper. For example, the module system vs ad hoc polymorphism: https://existentialtype.wordpress.com/2011/04/16/modules-mat... He also has in-depth critiques of laziness-by-default but the one link I found is a 404.


If you'd like to see Bob Harper's take on programming languages, have a look at the short video series Practical Foundations for Programming Languages.

https://www.youtube.com/playlist?list=PL0DsGHMPLUWVy9PjI9jOS...


My take is that OCaml lets you sneak a little mutation in, with a little effort, which can make a huge difference in the performance of some algorithms.
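A classic illustrative case (my example, not the commenter's): memoizing a recursive function with a mutable `Hashtbl`, which OCaml allows without any ceremony.

```ocaml
(* Naive Fibonacci is exponential; a mutable Hashtbl cache hidden
   behind an otherwise pure-looking function makes it linear. *)
let fib =
  let cache = Hashtbl.create 64 in
  let rec go n =
    if n < 2 then n
    else
      match Hashtbl.find_opt cache n with
      | Some v -> v
      | None ->
          let v = go (n - 1) + go (n - 2) in
          Hashtbl.add cache n v;
          v
  in
  go
```

Callers see an ordinary `int -> int` function; the mutation is a local implementation detail rather than something the whole program has to be structured around.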


Yeah, I agree. I have shaved a few seconds off my algorithm's computation through mutation. I felt like I cheated, though.


I'm not an OCaml or Haskell expert, but I expect that laziness makes performance harder to reason about?

Edit: but in this case, apparently the book was written for a course at Cornell where they teach both functional and imperative programming using the same language.


In my experience, reasoning about (or maybe being able to manage) memory consumption is more of an issue for fully lazy languages.

Having worked on a codebase written in a lazy (Haskell-like) FPL where the architecture relied heavily on laziness, the least pleasant issue to deal with was sudden and obscure explosions in memory consumption.

Which would of course have a big impact on performance so maybe this was your point also.


That's been my experience. Across the spectrum of languages I've used, I've found certain features, such as dynamic memory, managed memory, memory safety, and the evaluation model, all have an impact on how transparently you can understand time and space characteristics. I spend most of my time in assembly and C, but I love the sheer range of language options we have today. I put Haskell in the "Miranda" branch of languages, which I love for tackling some problems, but day to day I've never got a handle on how it translates into predictable characteristics of the generated code.



