There was an Erlang Day sequel later that summer: https://hackernews.hn/front?day=2009-08-20, which I remember because I unintentionally started it and pg got mad at me.
IIRC, the front page before that was filled with stories about _why (they're split between https://hackernews.hn/front?day=2009-08-19 and https://hackernews.hn/front?day=2009-08-20 now), and people were complaining about that. I happened to have an Erlang article I'd already been meaning to post, so I posted it and it triggered a deluge, followed by complaints:
I always wondered why I couldn't find any trace of this in the archives, as I distinctly remember the scolding I got! I've gone and unkilled them now, which should restore roughly what it looked like.
I had a similar problem loading the page in Firefox for desktop with private browsing. It turns out service workers don't work in private browsing, and Bene (the software rendering the page) seems to require them. Switching to a normal Firefox window solved the problem.
For packages where I don't include tests, I've had at least one downstream distro maintainer request that I include tests, since at least some of them treat npm or PyPI or whatever as the source of releases.
For packages where I do include tests, I've had at least one user request that I remove tests so that the footprint of the Docker image they're building is smaller.
Both are entirely reasonable requests, but package repositories don't really provide a good way of accommodating both at the same time, for instance, by allowing a separate upload of the dev gubbins such as tests.
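On the npm side, the closest thing to a middle ground is the `files` whitelist in package.json, which keeps tests out of the published tarball (this only addresses the footprint half of the problem, and the package name here is invented):

```json
{
  "name": "my-lib",
  "version": "1.2.3",
  "files": [
    "dist/",
    "README.md"
  ],
  "main": "dist/index.js"
}
```

With this, `npm publish` ships only `dist/`, the README, and the always-included files (package.json, LICENSE), while the test suite stays in the repo for anyone who clones it.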
As a software maintainer myself, I believe the downstream distro maintainer is the one in the wrong here.
You have a software project, with a build process, and the "output" or final product of that project is the library that gets uploaded to NPM.
If they are packaging a software library, they should do it from the project's repository, not from one of its output artifacts.
They would probably reject such a request themselves if someone downstream of their work decided to repackage their stuff and asked them to include tests and other superfluous content in their packages.
I don't know about other distros, but Debian makes it extremely easy to download both the binary package and the source package. For instance, on the page for the jq package [1], you can download the source using the links down the right-hand side, which includes the full test suite. The key, in my view, is that Debian has a nice way to associate both the final output artefact and the source (both the original source and their patches) with a specific version.
The way Debian packaging works is that maintainers usually keep their own copy of the source code of the project being packaged (what they call the upstream). So the packaging process does start from the actual, original source code repo of the upstream project. This code is kept in an "upstream" branch, to which Debian-specific files and patches are added, usually in a separate branch named "debian". For new versions, the "upstream" branch is updated with new code from the project.
All of which, if you ask me, is the correct way of doing any kind of packaging. Following that, IMO the same should be done for JavaScript libraries: the packaging should be done by cloning the project repo and adding whatever packager-specific files are needed there.
Notice how the upper part of your link says: [ Source: jq ], where "jq" is a link to the source description. On that page, the right-hand section contains a link to the Debian copy of the repository where packaging is done:
You can clone and explore the branches of that repo.
(Maybe you are a Debian maintainer; in any case, I'm writing this for whoever passes by and is curious about how I think JS or whatever else should be packaged if done properly.)
The downstream distro maintainer is in the wrong, IMO; if they want the source code, they can get it from e.g. GitHub and roll their own release.
That said, in old-school Java dependency management (i.e. Maven), you could often find a sources JAR and a Javadoc JAR published alongside the compiled binary release, so you got the choice.
But this can also be done with NPM libs already; the package.json shipped in the distribution contains metadata, including the repository URL, which can be used to get the source.
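As a minimal sketch of what that looks like in practice (the package name and URL below are invented), the repository location is just a field in the metadata shipped with every npm tarball:

```python
import json

# A trimmed-down example of the package.json shipped inside an npm
# tarball. Name and URL are hypothetical.
package_json = """
{
  "name": "my-lib",
  "version": "1.2.3",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/example/my-lib.git"
  }
}
"""

metadata = json.loads(package_json)
repo_url = metadata["repository"]["url"]
# This is the URL a downstream packager could clone to get the full
# source tree, tests included.
print(repo_url)
```

(`npm view <package> repository.url` queries the same field straight from the registry.)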
I had that discussion with someone before. They were under the impression that the registry is really permanent while GitHub could go away. Aside from the platform betting, they really thought their 2012 package (and particularly its tests) would be useful in 2052. But who am I to say otherwise.
The recent HN discussion about “that one npm maintainer” confirms people hold onto the most painful ideas.
My first thought was, "include a dev and prod version of the package" but that creates a ton of regression surface area for a feature that most people can't be bothered with anyway.
It's easy enough to have things work in pre-prod and fail in prod without running slightly different code between them.
I think there is a solution to this, but it's going to require that we change to something a lot more beefy than semver to define interface compatibility. Semver is a gentlemen's agreement to adhere to the Liskov Substitution Principle. We are none of us gentlemen, least of all when considered together.
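As an illustration of why the agreement is unenforceable (function names and behaviour invented): a "patch" release can keep the exact same name and signature while breaking the behavioural contract callers rely on, and neither the version number nor any type check will catch it.

```python
# Hypothetical library function, version 1.4.0: documented to return
# the top n scores in ascending order.
def top_scores_v1_4_0(scores, n):
    return sorted(scores)[-n:]

# "Patch" release 1.4.1: same name, same signature, but results now
# come back in descending order. Semver says 1.4.1 is a safe drop-in
# replacement; the Liskov-style behavioural contract says otherwise.
def top_scores_v1_4_1(scores, n):
    return sorted(scores, reverse=True)[:n]

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(top_scores_v1_4_0(data, 3))  # [5, 6, 9]
print(top_scores_v1_4_1(data, 3))  # [9, 6, 5]
```

Any caller that assumed the old ordering breaks, yet nothing in the interface changed, which is exactly the gap a richer compatibility notion would have to close.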
Reminds me of the four types of documentation that sometimes get listed: tutorials, how-to guides, technical reference and explanation. (Usual caveat of all models are wrong but some are useful.) https://documentation.divio.com/
My (perhaps overly simplistic) take would be that we should take the thinking we use on the product itself (Who's going to use it? In what context? What would they already know? And so on), and apply and adapt it to the docs as we would any other product.
I strongly back the Divio system for documentation; it works great. But you should know that the creator of the system doesn't work at Divio anymore, and the newest iteration is now called Diataxis: https://diataxis.fr/
To expand on listening to your gut instinct: I find taking a "trust but verify" approach useful. Take the time to dig into what your instinct is telling you, try to match it up in words to your hiring criteria (which should go beyond technical stuff, including capturing whether or not you'd want to work with the person), and compare against other candidates to check you're being consistent. For instance, you don't want to unfairly penalise someone for being loquacious just because their interview was right before lunch and you were getting hungry, whereas you enjoyed interviewing the similarly talkative candidate that happened to be interviewed just after lunch.
Yes, unconscious bias is a huge risk in my “trust your gut instinct” approach to hiring. For example, I bristle at the name “Daryl” because I went to school with someone with that name and I remember him laughing at me. Maybe this is why I’ve never hired someone called Daryl.
So I agree you need to articulate and challenge your reasons, even if only to yourself. Periodically review the ones you’re rejecting and ask others if there’s a pattern, then deliberately spend time with people represented by that pattern.
Unfortunately, this doesn’t come naturally to me and I have to work hard at it.
I GM an online TTRPG, and I wanted to replicate the experience of the players drawing the map themselves as they go along. We use Roll20, but didn't find the tools particularly well suited to updating the map in the moment.
So, I had a go at making a little tool that lets you quickly make rough sketches of the map, as well as letting you move tokens (for the characters) around. It's not particularly fancy, but it seems to work for us!
1. I think it's worth considering how much the Baumol effect is responsible for the price changes described. Specifically: we'd expect those industries that don't benefit from improved productivity (for instance, because AI doesn't actually meaningfully help with most parts of the work) to experience price rises, since they're competing in the labour market with other sectors that have benefited from improved productivity, and can therefore pay workers more.
Or, to put it another way, you'd expect this effect to some degree in a market operating entirely without restriction.
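To make that mechanism concrete with invented numbers: suppose manufacturing productivity doubles while a string quartet (Baumol's classic example of a stagnant sector) takes exactly as much labour per concert as before. If wages in both sectors track the productive one, the concert gets relatively more expensive even though nothing about it changed:

```python
# Toy Baumol-effect arithmetic; all numbers are invented.
wage = 20.0                    # starting hourly wage in both sectors

widgets_per_hour_before = 10
widgets_per_hour_after = 20    # manufacturing productivity doubles
concert_hours = 2              # a concert still takes the same labour

# Unit labour cost of a widget: wages double, but so does output per
# hour, so the cost per widget is unchanged.
widget_cost_before = wage / widgets_per_hour_before        # 2.0
widget_cost_after = (wage * 2) / widgets_per_hour_after    # 2.0

# The concert, with flat productivity, must still pay the going
# (doubled) wage, so its labour cost doubles.
concert_cost_before = wage * concert_hours                 # 40.0
concert_cost_after = (wage * 2) * concert_hours            # 80.0

print(widget_cost_before, widget_cost_after)    # 2.0 2.0
print(concert_cost_before, concert_cost_after)  # 40.0 80.0
```

So the stagnant sector's relative price rises purely because it competes for the same labour, with no restriction on the market required.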
2. If I remember correctly, in the past, when technological innovation eliminated jobs, the new ones that sprang up generally meant a rise in wages across the board. In other words, the overall effect was low-paid jobs being replaced with higher-paid jobs.
No more. In recent years, the graph of wages of replacement jobs looks more like a U-shape: they tend to be either low-paid or high-paid. For instance, take translating a textbook. A company might decide to fire its moderately well-paid translators with steady jobs and replace them with AI (built by another company, using a smaller number (per book) of high-paid tech workers), plus some low-paid proofreaders to fix up the mistakes. And the latter group is probably precariously employed: the better the AI gets, the less they're needed. And they're probably seen as lower-skilled and therefore easier to replace by the company, so perhaps their jobs look more like precarious gig economy jobs.
That's a simplification, but I think the evidence suggests that that U-shape is a pattern that, very broadly, holds true. So, we might not be unemployed, but we might have a lot of people with falling incomes and much lower job security.
(Apologies, writing this in a hurry, so can't find a source right now!)
An app for quickly and collaboratively drawing maps for tabletop RPGs.
I run a tabletop RPG for some friends over the Internet using Roll20. As a player in other (in-person) games, there have been times where we've collaboratively made a map as we've gone along rather than the GM providing one, and I wanted to be able to provide a similar experience for my players. Since we found Roll20 didn't really work for this use case, I'm cobbling together an app that tries to make the experience as fluid as possible. It's only really intended for my group when I'll be on hand to explain how it works and I'll be the only one deploying it, so the docs are somewhat sparse, but in case anyone is interested:
I maintain a library with ports to multiple languages (JavaScript, Python, Java). They have very similar structure, which means doing the same thing in pretty much the same way three times each time I make a change.
The idea I wanted to test with my language is: is it possible to extract a common subset that compiles into reasonably idiomatic code for those target languages? The compiled interfaces should be sensible (i.e. use of the code from the target language should be as good as if written in the target language directly), while implementations can be a little less tidy, but ultimately still readable and easily refactorable if the user ever decides to eject from my language and write everything in the target language(s) instead.
I doubt I'll ever use it in anger, and since it's nowhere near ready for use of any kind there aren't really any docs. In the unlikely event someone is interested, the most illuminating thing to look at would be the very beginnings of the reimplementation of the aforementioned library. Since I use snapshot testing with examples, you can see the source code, generated code and result of running the compiled test suite in one file: