binaryturtle's comments

I use Subversion w/o a server too. You can have your repositories locally (file:///Path/to/repository). All my own (single-man) projects are in local SVN repositories. For my use case git is just too much extra friction, and I still love having one single, unique, global revision number that increases linearly with each commit. :)

I loved the simple increasing numbers of Subversion. This was better than CVS's "ad-hoc" versioning and also far better than git's hashes. Single numbers are easy for humans. I would love it if git made it possible to work that way (there is one way: `git describe` can show something like "v1.0.4-14-g2414721", which means "14 commits ahead of tag v1.0.4").
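
For illustration, this is what it looks like in a shell (the output line is the example above; any repo with an annotated tag in its history prints something of this shape):

    $ git describe    # nearest annotated tag, commits since it, abbreviated hash
    v1.0.4-14-g2414721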

Some form of "increment authority" that simply appends to a list of hashes whenever it encounters a commit not yet on the list? Then you could use URIs like $authorityhost/orderedcommits/$number as synonyms for the hashes. Multiple increment authorities would not necessarily have them in the exact same order (and "current latest" would likely differ by an order of magnitude or two after some time if you ever had multiple authorities), but it would still provide a lot of intuitive understanding.
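
Roughly, a minimal shell sketch of one such authority (the flat-file storage and the helper names are just assumptions for the sketch, not part of the proposal):

    #!/bin/sh
    # register: append a commit hash to the ordered list, but only if it
    # isn't on the list yet. The line number becomes its serial number.
    register() {
        touch ordered-commits.txt
        grep -qx "$1" ordered-commits.txt || echo "$1" >> ordered-commits.txt
    }
    # lookup: resolve $authorityhost/orderedcommits/$number, i.e. print
    # the hash stored at line $number of the list.
    lookup() {
        sed -n "${1}p" ordered-commits.txt
    }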

I wonder how the tag mechanism would perform if you just burdened it with this content (say, one tag per commit). I suspect that it would not perform well...


Mercurial has serial numbers which work like this ("revision numbers"), but you can only see your local repo's serial number. There's no concept of a public revision number authority.

My release code uses `git rev-list --count $tag` to output the release number.
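
E.g. (tag name hypothetical; the count is whatever your history yields):

    $ git rev-list --count v1.0.4    # commits reachable from the tag
    1234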

Ironically, GitHub became popular because it gave everyone a centralized repo—exactly the thing people liked about SVN.

It's so weird. Git's major differentiating feature is that its distributed nature means you don't need a centralized server, and your work isn't blocked if that (unneeded) centralized server goes down. Yet, so many developers have twisted git around to make it a centralized single point of failure--on purpose! I don't understand it.

Git still works offline when github is down. Try committing or getting history when the svn server is down, let alone pushing to a different remote. The things github centralizes are the things that aren't actually git itself.

It used to be possible to use GitHub via svn too (but I think they removed that feature a while ago).

Yes, but the .svn dirs everywhere are a constant annoyance.

Since Subversion 1.7 (2011), working copies only use a single .svn directory at the top level.

Amazing! I remember copying over a project and failing to clean out all the .svn folders... Messed up two projects for the price of one :D

git rev-list --count HEAD

This is just sad. Luckily I do not use any of the listed programs. I threw out Homebrew many years ago when they started this nonsense.

The only tool I currently have installed that does %/"($& like this is Deno (required by yt-dlp now). It happily phones home even if you wrap it in a wrapper script that forces the env variable (in no way will I pollute my default environment with stuff like this):

    $ cat /usr/local/bin/deno
    #!/bin/sh
    # Force the opt-out env var on every invocation of the real binary.
    exec env DENO_NO_UPDATE_CHECK=1 /usr/local/packages/deno/latest/bin/deno "$@"

I wish bad dreams on whoever puts such crap into their software! Thankfully I have Little Snitch to catch most of those kinds of invasions of my privacy.

No, "grass always looks greener on the other side" is a perspective thing. If you stand on your own grass then you look down onto it and see the dirt, but if you look over to the other side you see the gras from the side which makes it look more dense and hides the dirt. But it's the same boring grass everywhere. :)

hah, i have been arguing this for years. first time i've seen someone else make the same argument. nice!

I preferred GP's poop-joke version, but to each their own.

At first I thought "this is missing the point of the phrase" and moved on, but now I'm back to say it's stuck in my head and is an intuitive, pretty neat way to think about it.

I have a simple rule: I won't pay for that stuff. First they steal all my work to feed into those models, and afterwards I'm supposed to pay for it? No way!

I use AI, but only what is free of charge, and if that doesn't cut it, I just do it like in the good old times: using my own brain.


I used GitHub's Copilot once and let it check one of my repositories for security issues. It found countless ones (30 or 40 or so for a single PHP file of ~400 lines). Some even sounded reasonable enough that I had a closer look, just to make sure. In the end none of them were actual issues. In some cases it invented problems which would have forced me to add wild workaround code around simple calls into the PHP standard library. That was the only time I wasted my time with that. :D

Wouldn't telemetry solve this problem automatically? I mean, they should get some signal back when people opt out, no? :)

How about computers having replaceable SSDs? There's no point in being able to exchange the battery when the soldered-on SSD dies first. (I've had more dead SSDs than batteries.)


This should be mandatory, although I've never had a computer where the SSD was not replaceable.

Some were a bit of a pain in the ass to replace, though.


At least there's a choice there. I've never bought a computer with a soldered-on SSD.


And get rid of soldered RAM while we're at it as well.


I once played a similar prank on a computer science teacher, back in the Windows 3.x for Workgroups era. I made a screenshot of the desktop (showing a window) and set it as the wallpaper. It took the man a little while to figure out why that window couldn't be closed (and after a hard reboot, the window popped right back up :) )


It's probably for "agents" that want to make websites for other agents. This has nothing to do with us humanoids.


I'm getting a lot, and I mean A LOT, of spam recently from various "<IP in reverse notation>.bc.googleusercontent.com" domains. Not sure what can be done about that, but the uptick is very noticeable.


Depends on the mail server. I'd probably 5xx all mail from googleusercontent.com, as I don't give a toss if something on Google's side breaks, and I could debug what happened from the mail server logs. Google's incompetence in marking all the OpenBSD mailing list traffic as spam is why I'm running my own MX.

If you have actual customers on your mail services, you should audit the logs and see if anyone is actually using Google for something legit (usually it's the spam, I mean, marketing department being their usual sleazy selves), and maybe flag the messages as potential spam by default. If you do have users doing something wacky with googleusercontent.com (email notifications from batch jobs, or something?), there are other ways those notifications could be done, e.g. over a VPN or via some other service. That would allow all of googleusercontent.com to be blocked from doing SMTP by default, ideally at the firewall level so less CPU is wasted on them.

The complication is that people forget or leave, so there might be some wacky workflow that uses Google running on some walled-off server somewhere; it may be a months-long "slow simmer" to see if there is anything legit hiding in the noise. Or you could yank the band-aid off and see what breaks?


Yup, same. I'm blocking bc.googleusercontent.com and also firebaseapp.com for now. The reverse DNS could also be used, as the fakey spam domains don't match up with the PTR record, but I want to watch the logs for a bit first to make sure that works nicely.

e.g.

    PTR:  53.220.83.34.bc.googleusercontent.com
    HELO: sax.co.uk
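
A sketch of such a block assuming a Postfix MX (the MTA in use isn't stated here, so treat the whole mechanism as an assumption; OpenSMTPD or Exim would look different). Postfix only reports a client hostname when forward and reverse DNS agree, which also lines up with the PTR-mismatch observation above:

    #!/bin/sh
    # Assumes Postfix with pcre map support. Writes a client-access table
    # that 5xx's any client whose verified hostname is under
    # bc.googleusercontent.com, then enables and activates it.
    cat > /etc/postfix/client_blocks <<'EOF'
    /\.bc\.googleusercontent\.com$/ REJECT mail from GCP client hosts not accepted here
    EOF
    postconf -e 'smtpd_client_restrictions = check_client_access pcre:/etc/postfix/client_blocks'
    postfix reload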


