It's not pointless at all. For long-term maintenance, community strength and the level of active development are just as important as (and sometimes more important than) minor feature differences.
I would often rather use a decade-old project that was developed solo by a world-class expert dumping code over the wall once every 6 months than a community project being hacked on by 100 amateurs.
Without additional context I find recency of last commit and number of committers to be almost impossible to draw useful conclusions from.
I wish this both-lazy-and-condescending missing-the-point hand-wavy-analogy argument style would die already.
The earlier poster in this thread implied that the number of contributors and the recency of commits in one of two competing GitHub projects were evidence that it was the better one.
My point is that these are inadequate (often totally misleading) heuristics unless both projects are otherwise extremely similar, which they usually are not; and even then they are rarely as useful as other ways of comparing the projects.
Unless you know who the authors are, what the project management and organization style is, how the project is funded and what level of commitment the authors have, what the release cycle is like, etc., or unless you examine the code directly, the only thing the most recent git commit tells you is how recently someone published public code changes. That is not something anyone evaluating two projects cares about directly; it matters only as a heuristic signal of other features that would be more costly to examine.
But note that commit recency doesn't give a remotely useful sense of how extensible the project is, how readable or efficient the code is, how well designed the API is, how good the documentation is, how friendly the community is, how competent the project management is, and so on.
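To make that concrete: here is essentially everything those two signals amount to. (A rough Python sketch; the helper names and repo_path are my own, and it assumes a local clone with git on PATH.)

    # Rough sketch of the two "signals": last-commit date and committer count.
    # Assumes a local clone at repo_path and that git is on PATH.
    import subprocess

    def run_git(repo_path, *args):
        # Run a git command in the given repo and return its stdout.
        out = subprocess.run(
            ["git", "-C", repo_path, *args],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def last_commit_date(repo_path):
        # ISO-8601 date of the most recent commit: literally "how recently
        # someone published public code changes", and nothing more.
        return run_git(repo_path, "log", "-1", "--format=%ci").strip()

    def contributor_count(repo_path):
        # Distinct authors per `git shortlog -sn HEAD`: a head count that
        # says nothing about expertise, organization, or code quality.
        return len(run_git(repo_path, "shortlog", "-sn", "HEAD").splitlines())

Two trivially cheap numbers, which is exactly why they are tempting and exactly why they carry so little information on their own.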
If we want to make a car analogy, it’s like choosing which car to buy based on how frequently the company introduces new models, or how many engineers they employ, rather than based on customer reviews, reliability estimates, accessibility of mechanics, gas mileage, top speed, or storage capacity.
Your argument is basically analogous to: “because the average car with frequent model updates is better than the average car with infrequent model updates, criticizing update frequency as a primary criterion for choosing a car is invalid”. Notice that you haven’t even bothered to examine whether your premise about the relationship between update frequency and quality is true, or whether that average relationship makes update frequency a practically useful heuristic.
You are arguing a red herring. I never posited that update frequency is the only useful metric, or that you shouldn't consider how well the library itself works. You should consider all the aspects that are relevant to your use cases, and often this should include community strength, along with the intrinsic library design, funding, documentation, etc. Certainly you shouldn't stop at the library design and code itself; that is just one of many considerations to weigh when adopting a dependency.
In the absence of any other information, the more recently updated codebase is preferable to the less recently updated one, for the same reason that an abandoned codebase is undesirable.
An alternative, equally speculative conclusion from the same data: the very-in-flux code is so shoddy that there are constant security bugs needing weekly fixes, whereas the stable and relatively inactive code is so rock solid that nobody ever needs to touch it for it to keep working.
For example, how often does DJB publish new code changes to his various projects?
You are right, but we don't know the proficiency of the developers. The best information we have is the repository update history, and that's what you have to go on if you want to compare the projects quickly.