> that highly paid programmers in the US also brought in a ton of profit
In Germany for instance I've seen many a company that treated their programmers as a cost center and they actually were (probably a mutually reinforcing self-fulfilling prophecy).
Too many instances of programmers being deployed in such a way that I couldn't possibly see a way that they would get back even that meagre investment that was being made. Fully irrational dev teams doing useless busy work.
Most German "startups" used to be replaceable with Zapier and Pipedrive. That has probably only gotten worse with the advent of LLMs.
I've read more than my fair share of these tutorials, and I'd like to be proven wrong here but I don't think I've ever seen one that explains what the point of these functional constructs (similarly with Applicative etc.) is.
"You can do IO now." So what? I could do IO before that as well.
Very rarely are practical explanations discussed. Even if they are discussed, the treatment is shallow and useless.
You may appreciate my own contribution, https://www.jerf.org/iri/post/2958/ , which includes an entire section titled "If They're So Wonderful Why Aren't They In My Favorite Language?", a section explaining why IO is not a good lens to understand monads and why "monads" don't really have anything to do with "making IO possible" (a very common misconception), as well as what I believe to be one of the more practical applications of monads: a way of generating an audit log of how a particular value came to be what it is. That example specifically arose from one of the rare instances I used the monad pattern in my own real code. Though I still didn't abstract out the monad interface, because if you only have one, that does you no good. The entire point of an interface is to have multiple implementations. It just happens to be a data type that could have implemented the monad interface, if there had been any use for such a thing in my code, which there wasn't.
As I understand it, one thing the tutorial didn't go into, which I think is an important subtlety, is that it's not enough to have an implementation of "bind" to have a monad interface. You also need an implementation of "return : a -> m a" (i.e. a way of making sources of 'a's when given an 'a'), AND a proof that these implementations together satisfy the monad laws (i.e. that they "play nicely" together).
Without all three components, you can have something that "looks like" a monad, in that it has definitions for "bind" and "return", but isn't actually one, because those particular definitions don't also satisfy the monad laws.
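To make the "all three components" point concrete, here is a minimal sketch in Python (hypothetical names, using None as the "Nothing" case) of a Maybe/Option-like bind and return, with the three monad laws checked on examples rather than proven:

```python
def unit(x):
    """return : a -> m a — wrap a plain value. For this None-based
    encoding, wrapping is the identity; None is the empty case."""
    return x

def bind(m, f):
    """bind : m a -> (a -> m b) -> m b — short-circuit on None."""
    return None if m is None else f(m)

# Two example Kleisli arrows (functions a -> m b):
f = lambda x: unit(x + 1)
g = lambda x: unit(x * 2)

# The three monad laws, spot-checked:
assert bind(unit(3), f) == f(3)                # left identity
assert bind(unit(3), unit) == unit(3)          # right identity
assert bind(bind(unit(3), f), g) == \
       bind(unit(3), lambda x: bind(f(x), g))  # associativity
```

The point of the parent comment is exactly that these assertions are obligations: you could define a `bind` that passes the type checker but fails one of them, and then you have something monad-shaped that isn't a monad.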
Per the very last section, I chose to elide those. That's mostly because few languages worry about "laws" anyhow, and lawfulness is less consequential in a non-lazy language, because even if you nominally screw up the lawfulness, the code will still reliably do whatever it does. While I suspect we could find a pretty solid plurality of HN readers to be at least somewhat appalled at the idea, I think the general programming world is not much worried about it.
Plus it's rather like giving out criteria for how the frosting on the cake will be judged when most of the contestants are submitting piles of slightly dampened raw flour with an egg cracked over it and being offended when you won't agree that's a "cake".
Ah yep - I missed that mention of "lawfulness" in the last section. I guess the minor gripe I have is that that really isn't anything to do with Haskell: it's that you only have a monadic interface when the laws are satisfied (and Haskell itself doesn't, and can't, enforce the laws: you have to check them / others using your interface have to trust that you've checked them.)
I don't quite follow why you're making a distinction for non-lazy languages?
If you want to actually use any generic monad combinators with your monad interface, and expect it to behave sensibly, then the laws had better be satisfied!
But yeah... Nice article, and I really liked your "Noun / Adjective" distinction.
My understanding, which may be incorrect, is that the major reason lawfulness matters in Haskell is laziness. An unlawful monad doesn't just do "something that violates the laws" in an abstract, mathematical sense that maybe you care about, maybe you don't; the combination of laziness and an aggressively optimizing compiler means the result will be very unpredictable, and slight, seemingly isomorphic source changes can produce different results.
In Python, if I write an iterator on something pretending to be a list, and when it sees strings it doesn't just return an uppercased string but actually modifies the contents of the list to be uppercased, that's stupid, but at least, since it's a strict language where that isn't interleaved with IO and all the other stuff flying around in Haskell, it will be consistently stupid. It isn't going to blow up or behave differently if I accidentally flip an "a + b" into a "b + a" somewhere.
It's bad, but Haskell has a whole different level of bad if you screw with it and don't play within the sandbox.
There is a definite "I'm being more pragmatic here than the average Haskell programmer" effect going on here. I... how to put this... "won't blink" is too strong, but... if I need to violate a law, if I need to write something like the stupid iterator above, I am in fact willing to. I have the decency to feel bad about it, and there will be extensive and probably bitingly sarcastic comments attached to it, but I'll do it. (Generally only when I don't control one end of the source code, though. If I have full control I never do anything that stupid.) But in Haskell it's a particularly bad idea, mostly because of the laziness and its interaction with other things.
And, heh, in a world where the struggle to explain what monads even are to people, monad combinators aren't even on my horizon.
Supporting "Option" is not "having monad". An Option data type can implement a Monad interface, but you can have an Option data type with no particular monad support in your language, or you can have an Option data type that implements something like "bind" or "join" but there's no interface that it conforms to.
If that sounds like gibberish it's because you don't have the right definitions loaded into your head. You can read the article I linked to fix that.
In this case note that what you are calling "Option" is called "Maybe" in Haskell and also in that article. There is an entire subsection explaining why using Maybe/Option as a lens to understand "monad" is a bad idea because by monad standards, it's degenerate, and degenerate instances of an interface make for bad examples. Just as if you're going to explain "iterator" to someone, starting out with "the iterator that returns nothing" isn't really a good idea, because it's not good to try to explain a concept with something that right out of the gate in some sense denies everything about that concept.
It's a common mistake. There are also people who think that by adding flatmap to their list/array data type they've "implemented monads". No, they've just implemented flatmap on their list/array; they don't "support monads" by doing that. There are plenty of monad implementations that can't be understood as "flatmap", such as STM. ("flatmap" completely fails to capture the idea that a monad implementation may carry around additional data not visible from the level you're using the implementation on. That's one of the main reasons my example is structured the way it is in the article.) "flatmap" isn't "monad" in exactly the same way that "walk the next item in the array" isn't "iterator", or even more simply, "red" isn't the same as "color". Flatmap is an implementation of monad, walk the next item in the array is an implementation of iterator, red is an implementation of color.
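The "additional data not visible" idea can be sketched with a Writer-style monad (hypothetical names, in Python): values are (value, log) pairs, and `bind` threads the log invisibly — something a bare list flatmap has no notion of:

```python
def unit(x):
    # return : wrap a value with an empty log
    return (x, [])

def bind(m, f):
    # bind : run f on the value, concatenating the hidden logs
    value, log = m
    value2, log2 = f(value)
    return (value2, log + log2)

# Two steps that each record what they did:
def double(x):
    return (x * 2, [f"doubled {x}"])

def inc(x):
    return (x + 1, [f"incremented {x}"])

result = bind(bind(unit(5), double), inc)
# result carries both the answer and an audit trail of how it was produced
assert result == (11, ["doubled 5", "incremented 10"])
```

This is roughly the shape of the "audit log" use case mentioned upthread: the caller chains ordinary-looking steps, and the monad plumbing accumulates the log behind the scenes.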
Very few languages let you write a function that works for both Option and for other not particularly related monadic types (e.g. Future), while being fully typesafe, which is what I'd call "having monads".
Many languages have monads “by accident”, e.g. JavaScript by way of Array.flatMap() - but the fact that this type happens to satisfy the monad laws is not particularly useful.
Nobody will explain it to you like this, but the main point was being able to satisfy the compiler without introducing an escape hatch into the language.
Haskell is based on Miranda, and Miranda is based on Hope. Purely functional languages were really purely functional, academic experiments with no way to express side effects, so no way to express practical programs.
Philip Wadler took the monad (a name that already existed in category theory) and showed how computations could be expressed in Haskell, with the “do notation” as an example. That made Haskell practical without breaking the “beauty” of the language, i.e. without having to introduce new special syntax or something outside the type checker's capacity.
So, I don’t think there’s a motivation besides being an exercise in expressivity within the limitations of pure functional programming. Similar ideas in describing computation as lazy executed instructions already existed elsewhere, like the interpreter pattern.
> and showed how computations could be expressed in Haskell with the “do notation” as an example
To be clear, do notation is new special syntax that was added to make monads more ergonomic. Traditionally you used >> or >>=, which looks a lot more like closures.
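The nested-closure style that `>>=` implies can be sketched in Python (hypothetical example, using a None-propagating bind); do notation is sugar that flattens exactly this nesting:

```python
def bind(m, f):
    # Maybe-style bind: short-circuit on None
    return None if m is None else f(m)

def lookup(d, k):
    # returns None when the key is missing — our "Nothing"
    return d.get(k)

env = {"user": {"name": "ada"}}

# Roughly what do notation would express as:
#   u <- lookup env "user"
#   n <- lookup u "name"
#   return n
result = bind(lookup(env, "user"),
              lambda u: bind(lookup(u, "name"),
                             lambda n: n))
assert result == "ada"
```

Each `lambda` here is the continuation that `>>=` passes the previous result into; the sugar exists because the nesting gets unwieldy fast.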
The point is rather that in a pure language, each IO operation needs to depend on a sort of "world state" which is updated by each operation. They chose to implement this state threading as the IO monad, but there could have been other ways.
Think of Monad (and really the whole typeclassopedia in a way) like Iterable in Java
what does it give you? for loops
what do the haskell things give you? various types of for loops. monads in particular have `do` which is a language construct. but the rest are just higher order functions that are specific types of `for` loops
you learn a handful of these and then all programs are the same. you can whisk together complex control flow across domains with the same few abstractions. you can hop into a library, see these type class instances, and know how to use the library.
From the very beginning of the article (level 1), I don't see what's wrong with code that looks like the following. Early return seems to fix the "typing this makes me feel ill" part? To me, the following code seems perfectly readable without requiring the reader to know about function composition.
I find that pretty repetitive, but more, having to reason about branching control flow adds a lot of mental overhead that I'd rather spend on my business logic.
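The contrast under discussion can be sketched like this (hypothetical field names, in Python): explicit early-return branching versus chaining through a None-propagating bind. Both behave identically; the difference is where the branching lives.

```python
def get_city_branching(user):
    # Style 1: explicit early returns at every step
    address = user.get("address")
    if address is None:
        return None
    city = address.get("city")
    if city is None:
        return None
    return city.upper()

def bind(m, f):
    # Maybe-style bind: the one place the None-check lives
    return None if m is None else f(m)

def get_city_chained(user):
    # Style 2: the branching is factored out into bind
    return bind(bind(user.get("address"),
                     lambda a: a.get("city")),
                lambda c: c.upper())

user = {"address": {"city": "Berlin"}}
assert get_city_branching(user) == get_city_chained(user) == "BERLIN"
assert get_city_branching({}) is None and get_city_chained({}) is None
```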
From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code. That, and compatibility with transformers.
Whether this is the best thing since sliced bread or not, is left as an exercise to the reader.
Hah, I like that: the main benefit of monads is turning your functional language back into an imperative one...
IMO it's because option is a monad, list is a monad, io is a monad, async is a monad, try-except is a monad, why invent different magic syntax and semantics for all of them when there's a perfectly good abstraction that covers the lot, and that lets you write functions that are agnostic to which particular monad they're in to boot.
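The "agnostic to which monad" part can be sketched in Python (hypothetical names) by writing a pipeline that only uses the (unit, bind) pair it is handed, so the same code runs unchanged in a Maybe-like or a list-like context:

```python
def pipeline(unit, bind, m):
    # add 1, then double, inside whatever monad (unit, bind) describe
    return bind(m, lambda x: bind(unit(x + 1), lambda y: unit(y * 2)))

# Maybe-like instance: None short-circuits
maybe_unit = lambda x: x
maybe_bind = lambda m, f: None if m is None else f(m)

# List-like instance: bind is flatMap
list_unit = lambda x: [x]
list_bind = lambda m, f: [y for x in m for y in f(x)]

assert pipeline(maybe_unit, maybe_bind, 3) == 8
assert pipeline(maybe_unit, maybe_bind, None) is None
assert pipeline(list_unit, list_bind, [1, 2]) == [4, 6]
```

Languages like Haskell let the compiler pass the right (unit, bind) pair implicitly and typecheck it; here the dictionary-passing is manual, which is roughly what you lose without typeclasses and HKTs.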
> From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code.
Nah, I don't even use the syntax much any more. The main benefit is the huge library ecosystem that works generically with any monad, so that if you want to e.g. traverse over a datastructure with your effectful action you can just use cataM or whatnot from recursion-schemes instead of writing it yourself, if you want to compose pipelines of them you just use Conduit, etc.
Within most languages, you're operating at a semantic level where much of the "point" is already obviated for you. They deal with fundamental structure that you take completely for granted and use implicitly. A monad is very simple at its core: it's an ordered collection, flattened into a single context. What you're collecting, what that ordering means, what that context is, etc. define what the monad is used for.
You could do IO? IO requires temporal ordering. Take for instance:
print("Hello ")
print("World!\n")
Would obviously result in:
Hello World!
But would it? You are implicitly assuming that the first line will be evaluated and print before the second. It's a reasonable assumption to make, most programming languages embed that in their execution semantics. What if I told you that the assumption isn't actually guaranteed? What if we didn't give that temporal ordering in the same way? What if for instance, a function could return a result without evaluating its arguments? This is called non-strict evaluation (note: this does not necessarily mean lazy evaluation). In the case of a non-strict language, you would need some way to tell the program that the first line should happen before the second before you can do any kind of IO. For a strict language, the IO monad doesn't make sense because you don't need to tell the program that.
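The sequencing point can be sketched in Python (a toy illustration, not how Haskell's IO actually works): if "statements" are just thunks that an evaluator may or may not force in order, nothing guarantees which print happens first — so sequencing has to be made an explicit dependency, which is what a monadic `then`/bind provides:

```python
out = []

def print_hello():
    out.append("Hello ")

def print_world():
    out.append("World!\n")

def then(action1, action2):
    # A toy sequencing combinator: force action1, THEN action2.
    # This explicit "first this, then that" dependency is what the
    # IO monad's bind encodes for a non-strict evaluator.
    action1()
    action2()

then(print_hello, print_world)
assert "".join(out) == "Hello World!\n"
```

In a strict language the source order already is the dependency, which is the parent's point about the IO monad being unnecessary there.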
Haskell is almost like a metalanguage. You're describing a program, but it's not like describing a program in Python or Scheme. You are expressing a program in graph reduction, and that's very different from how you're used to thinking of computer programs. That's the practical reason why Haskell has the IO and State monads: they provide a temporal grounding for instructions. Your program has a completely different concept of flow than in the real world, and these are tools you have to bridge that gap. It's important to note that this is just one very specific use case of monads.
If you find the treatment to be shallow, it's probably because you're looking for answers in shallow contexts. I used to be as confused as you, and the answer I eventually discovered is that I was ignorant of my own ignorance. I needed a healthy dose of computational philosophy to broach the subject. As someone else has said, once you understand it, it can be hard to explain it to someone who doesn't understand it. It's not a short topic to be learned in a series of twitter posts or a blog. It's something you come to understand after a lot of exposure and study and careful rumination. And of course, primary sources.
A joke says that its because once you get it, you lose the ability to explain it like a normal person :)
And another joke says the best way to explain a monad is to write yet another tutorial, so sorry for this.
Just think of it as a box.
If Amazon shipped items bare, with no boxes, it would be hard to pack them, there would be no way to standardize, and things would break or get lost all the time.
Now, if you put an item into one of the standardized boxes, that makes things 100x easier. Now you can put these on a conveyor belt, now you can have robots sorting them, now you can use tape to close them; standardization becomes easy, as it's not "t-shirt, tennis ball, drill" but just "box, box, box".
So now you can do all kinds of things because it's all a box. And you can also stress test the box.
It's the same with these.
A. You can just have a function that: calls something over IO, maps its values, does a calculation, retries if wrong, stores the result, spits it out.
Or B. you can have functions that call any function over IO, functions that map any value to any other value, functions that take any other function and, if that function fails, call another function or retry, one that stores any value given to it and reports whether it saved or not, etc.
The result is the same in the end, but while A makes the workflow strictly defined for that one case, so you have to handle every twist and turn manually (did the save succeed? what if not? write a check, write a test that ensures the check catches the failure, same if it does...), B lets you define workflows out of pre-tested, pre-built blocks that work with any part of your codebase.
And it makes your life 1000x easier, because now you have common components that work with any data type in your codebase, always do things your way, are 100% tested, and make it easier to handle good cases, bad cases, wiring, and logistics. And you can build pipelines out of them. Because in the end, what it does is let you chain functions that return wrapped values.
And you end up with code like:
val profileData = asAsync { network.userData(userId) }
    // returns an Async<Result<UserData, Error>>
    .withRetries(3) // works on Async, returns Result, retries the async call if it fails
    .withTraceId(userId) // wrapped flatMap that wraps success into Trace<T> and adds a traceId
    .mapTrace(onError = { ErrorMappingProfile }, { user -> Profile(user.name, user.profileId) }) // our mapTrace is a flatMap for Trace objects: it knows how to extract them, call the functions, and wrap them again
    .store("profile_data") // wrapped mapCatching for storage that works on Trace objects: unwraps them, stores them
    .logInto(ourLogger) // maps Trace objects into the shared logger
Each of these things would previously have had to be written manually inside the function, and the whole function tested for every edge case: if/elses, try/catch, match/when/switch.
This way, the only thing you need to cover with tests is `network.userData()`, as all the other parts are already written, tested, and do what they say they do. And you can reuse this everywhere in your projects. Instead of being a function you call with data, it becomes a function you give a box and it returns a box. Then you can give it to any other function that needs a box. If boxes make no sense, think of the studs on Lego bricks, or pipe connectors in plumbing, or stacking USB adapters or power strips.
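The same idea can be sketched outside that pseudocode, in Python, with hypothetical names: a couple of small, generic, pre-testable combinators over a (value, error) box, chained instead of writing try/except and if/else inline each time.

```python
def as_result(action):
    """Run an action, wrapping success or failure into a (value, error) box."""
    try:
        return (action(), None)
    except Exception as e:
        return (None, str(e))

def map_result(result, f):
    """Apply f to the value if there is one; pass errors through untouched."""
    value, error = result
    return (None, error) if error else (f(value), None)

def with_retries(action, attempts):
    """Retry the action until it succeeds or attempts run out."""
    for _ in range(attempts):
        result = as_result(action)
        if result[1] is None:
            return result
    return result

# Usage: the pipeline reads as a chain of boxes, not branching logic.
# (network call stubbed with a lambda for illustration)
profile = map_result(with_retries(lambda: {"name": "ada"}, 3),
                     lambda user: user["name"].title())
assert profile == ("Ada", None)
```

Each combinator is tested once and reused everywhere; only the action at the front of the chain is new code per call site, which is the point being made above.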
I can't stress enough how much this approach helped me in real life cases - refactoring old codebases especially, as once you establish some base primitives, the surface area starts massively collapsing as the test surface area increases.
That said, I've never really understood the enthusiasm the industry has for introducing Monads outside of Haskell. As I understand it, at the time Philip Wadler wrote his paper, Haskell was pretty painful to use due to its adherence to purity. Monads were presented as a way to maintain purity while providing a principled way to support all kinds of effectful computations. But without some of the features Haskell provides (I'm thinking of typeclasses and HKTs in particular), and given that almost any language you'll be introduced to outside of Haskell already has ways to do e.g. IO or whatnot, it almost always ends up feeling like bolting something on with not a lot of benefit.
Don't get me wrong, I think there's value in stuff like https://github.com/fantasyland/fantasy-land --I find organizing how I think about computations around these algebraic concepts helps me a lot, personally. But that's distinct from introducing these concepts into day-to-day work in a non-Haskell language, especially on a team, which is often more trouble than it's worth unless everyone has already bought into it and is willing to deal with the meaningful friction introducing this stuff produces.
I assume the overabundance of Monad tutorials and libraries has to do with the cachet of knowing this relatively obscure, intellectual thing and being able to explain it to your peers, or to be more charitable, perhaps it's a byproduct of getting excited about learning this new, distinct way to approach computation and wanting to share it with everyone. But the end result is that now we have tons of ridiculous tutorials and useless Monad libraries in tons of languages.
It happens all the time. For a brief period I understood musical notation and rhythm and then it was gone. Similarly I had a time in my life where I knew by feeling whether a French noun was le or la.
The sneaky thing here is that understanding, and knowledge in general, disappear silently. So you don't notice it when you unlearn and forget something. Only if a situation comes up where you need that understanding again do you notice that it is gone. Coming to understand something is conscious, losing that understanding is unconscious.
In what sense is bluesky irrelevant in this context? It's obviously not Twitter scale, but no alternative to GitHub will be GitHub scale for a long time to come either...
And it does seem to have the right feature set. Not sure which other social graph/network you could reasonably build a GitHub alternative around that would be less irrelevant....
It's largely perceived to be an ideological site. Obviously every community has its own biases and tastes, but I think Bluesky has just captured the imagination as the "left-leaning social platform." When the NYT was talking about a potential link between the WHCD shooter and Bluesky posts, that's what they referred to Bluesky as.
Obviously Tangled can live completely separate from Bluesky, it doesn't even need to share branding. Protocols are just protocols and people who don't understand how email works often don't even realize that Outlook and GMail use the same protocols. I'm hoping for this future personally where ATProto is only something the nerds care about (and write code for.)
(Please don't respond to this post with ideological argument. I'm just trying to talk about Bluesky and ATProto.)
That may be the case, but anyone can use ATProto. Unlike X where reach is suppressed for ideological motivations, or Mastodon with the federation turf wars, anyone can use it, regardless of their politics. If you disagree with the ideology of the majority users and avoid it for that reason, it just perpetuates the problem.
Unfortunately, I suspect it is only that way at present because the "other side" is perfectly content to continue existing in a communications environment that prioritizes them, rather than one that is actually open.
Unlike Mastodon? What's the difference? Anyone can use AP regardless of politics, you just might get banned from other's infra the same as for ATProto.
I wonder how much this translates to places outside the US... Bluesky being the place for everything Center-left and left of it by US standards would just make it the place for mainstream opinion in much of the EU.
Personally I found it much easier to avoid politics on Bluesky than on other platforms. Which is why it's been more sticky for me than Twitter was. And I put that down to having good feed control, and not being beholden to an algorithm that tries to keep me engaged.
It doesn't. I don't really believe this meme of the center-left and left in the US being the mainstream in the EU. It's true that certain attitudes around labor and economics are shared between the American left and more center left and center EU parties, but our stances on social issues are completely different. There's an entire fabric of multiculturalism that's present in the US that just isn't in the EU that has a very different lens. For example, the US just doesn't have anything resembling an EU-style Christian Democratic party from a social values perspective at all.
Bluesky is mostly about day-to-day American politics, which means talking about how a court ruling is bad, how Trump did something stupid, or how the current admin is corrupt. The complaint I've read from most EU folks is how American day-to-day politics takes up way too much of the site.
I was unable to turn off politics without pretty much completely nuking my feed. I tried using mute words, but that ended up just turning off most of my timeline. I built a US Politics labeler that worked pretty well, but it ended up having a similar effect. Content outside of the politics on the network just isn't very interesting. Pretty much none of my hobbies are well represented there except some photography, and the photography is mostly about sharing pictures (which is definitely cool) rather than talking about shooting weddings or events or street the way it tends to shake out in other photography communities.
I mostly agree with you that the political landscapes are mostly extremely different, rather than just shifted. Incidentally, when I tried Twitter (pre take over) it was more the US centric activist left that drove me away.
(Edit: That said, in the previous comment I was primarily thinking about the fact that the Republican party now explicitly supports far right and populist parties in Europe that are firmly outside the mainstream.)
That said, it sounds like your problem is more that other stuff isn't there? I am an academic, I followed interesting people in my field, and I am mostly on the feed that just shows me stuff from people I follow (plus a few curated feeds). So I didn't try to actively block stuff, and I have enough content to spend ten minutes every other day on the site and find new and interesting things. So maybe it's the combination of the niche I am after and the fact that I don't want to spend too much time on social media anyway that makes bsky a good experience for me...
Ah yeah if you're an academic it makes sense presuming your niche is there. I'm looking for a more general hobby site and sadly Twitter and Reddit are still that to me.
Fair enough! We are a pretty small ecosystem all in all. I will say in Tangled's case their infrastructure is separate from Bluesky's for the most part, and the rest can be switched easily enough if ever needed.
One example: if you don't care at all about atproto, you can create a new account on Tangled's website, which creates the account on their servers, but thanks to how atproto works it's just like you made one on Bluesky, and you can still interact with Tangled and everyone on the protocol for its social features.
We're not discussing social networks though, this is about Git project hosting. Bluesky doesn't have to compete with Twitter, Facebook, Instagram, TikTok or any of that for Tangled to be useful.
Those were all isolated places, where you needed an account for that specific forum (whereas you can use your PDS anywhere), where that forum held your data (whereas you hold your atproto data, and we all internetwork to see the aggregate), where you were subject to the moderation decisions of that forum (whereas you have control over your PDS (but not other people's clients)).
Pretty unclear what your comment is trying to indicate but it sure feels very different to me, and I've offered some characterizations for why.
More generally, atproto is useful for all kinds of tech; it solves the cold-start social network problem. Aren't we reinventing forums, and tv watching, and book reviews, and trail maps, and photo sharing, and streaming, and d&d, and key attestation, and file sharing, and publishing, and note taking and containers and git hosting? Yes. Yes we are. https://atstore.fyi
(Under a common protocol set, in a way that respects users unlike everything else that's happened online so far.)
Fuck yes it is. Centralizing everything & owning all the data & giving users no rights is technically easy as hell. Walk in the fucking park!
It's also abjectly awful. Sites change hands, change owners, change terms of service, change moderators... sites shut down. Taking "your" content with them.
To accept centralized services is technically simple but it means accepting the infinitely long timeline of social and corporate complexity, that plays out, day by day & decade by decade.
As a user it is much simpler for me to have a PDS where my data is, that I control, and manage.
And it's all my data. I don't have to manage my data across thousands of different profiles on different sites. On that hostile unstable ground I've spoken of, each of these sites evolving and devolving in their own multi-variate ways.
It's perhaps sometimes simpler to have experiences that only exist on one site. But it's much richer and more interesting when different apps can interoperate, even adversarially, in competitive cooperation. I also think that for users, once you get the hang of it, the patterns of having a PDS emerge and provide a consistency of experience across apps, whereas each site has to be learned on its own terms: it's simpler having a single sign-on from your PDS, having tools you can manage all your data with. (https://pds.ls)
We have atproto experiences like the just-launching Disperse that work across the different apps, that bring them together. Multiple different apps do bookmarking using the same lexicon, some with their own add-ons. A single site can't give you that ability to work across systems, to work broadly. You are reduced to working on a site-by-site basis. That's simple until it becomes overwhelming, at which point better tools become simpler: having a PDS leaves the door open to making these better tools, that work more broadly, across systems.
https://bsky.app/profile/quillmatiq.com/post/3mkq2hzfjn22s
Whereas you might get stuck with a complex bad app hosted on one site, with atproto your data is yours to manage and you can use whatever app you please, and those different apps can attune to different user types: users who want a rich complex app might use one app, other folks can prefer a simpler, less complicated app. Not being evolutionarily handcuffed to one company's app your whole life lets everyone explore not just what is "more complicated" or simpler, but a whole range of preferences that suits the user, that better matches what the user wants, that can grow and evolve with the users and their data, in a way that the "simple" centralized system by definition cannot enable.
It's so wild to me how much I see the dual of complexity. Rich Hickey talking about complecting is spiritual holy matter to me, is deeply cherished. But I see so so so many people who use aversion to complexity as a tool to turn off thinking, to not regard the complexity: endless Chesterton's Fences, no curiosity about why the fence might be there. There's such strong swelling of dark energy to tear down destroy and berate, the smugness of those so happy to criticize ram usage or to say software is all bad now. I dunno man. It really tires me out having such shallow engagement such superficial reactions.
Personally my soul craves a deeper connection with systems, wants to engage and explore, to sift what complexity brings interestingness from that complexity which is incidental. I've written a bunch of posts in this submission what I think is so so so interesting in the inherent choices of having an Authenticated Transfer Protocol, where you can send my data around over http, or web sockets, or MoQT, or IP over avian, or tin cans and shoe string, and still have it be clear to the world that that data is mine. Very few other systems offer that. ActivityPub certainly doesn't practice that widely, maybe doesn't offer that at all. All these systems we've built assume a host that stays online forever, that owns the responsibility fully. It's wicked deeply compelling to me to march into some complexity to liberate us from this ancient bitter internet constraint. Hello & thank you, complexity.
The entrenchment here is ridiculous. The average software engineer does not care at all about their vcs or their forge and their knowledge of both is extremely shallow.
For the people who want to do their work and get on with their life, it does not really matter that much.
Exactly. I run my own Gitea server, but put my stuff on Github, because that's where the people are. Self-hosting an MP3 is not the same as being on Spotify.
I keep my open source work on github for similar reasons. I don't expect nor want to deal with contributors having to create accounts on a self-hosted forge for every individual project they work on.
The same thing happened to Twitter. All the online properties we used will be gutted and sunsetted eventually. The only thing we can do is move on and slash and burn a new pasture.