I think the author misdiagnoses the problem, and the proposed solution simply hides the centralization instead of removing it.

The reason AWS is expensive is not IPv4, or the datacenters. The cost is mostly in their software/managed offerings, and in the ability to quickly add more servers. If you are a "serious company" and you don't want to pay AWS or a similar provider, hosting your own servers, either on your own premises or colocated in a rented rack in a datacenter, is doable and done by lots of companies.

I disagree that certificates have caused centralization; they're not what separates the haves from the have-nots, and they are in no way comparable to owning or not owning a mainframe. HTTPS becoming pseudo-mandatory is not what pushed people away from having their own (sub)domains, which is nowadays the only requirement for obtaining a certificate. That shift had already happened out of convenience.

The other point of centralization mentioned is DNS, which Tailscale doesn't avoid at all: MagicDNS still relies on the ICANN root, as does the Tailscale control plane. And if all you wanted was a free subdomain, there are plenty of people offering that.
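To make the dependency visible: a minimal sketch with dnspython, assuming MagicDNS is answering on its usual 100.100.100.100 address and using a made-up tailnet hostname. Tailnet names are answered locally; anything else is still resolved through the ordinary ICANN-rooted hierarchy.

    # Sketch: point a resolver at MagicDNS and query both a (hypothetical)
    # tailnet name and a public name. The public one still goes through
    # the normal ICANN-rooted DNS tree; MagicDNS only short-circuits
    # names on the tailnet. Requires dnspython (pip install dnspython).
    import dns.exception
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["100.100.100.100"]  # Tailscale's MagicDNS resolver

    for name in ["myhost.example-tailnet.ts.net", "news.ycombinator.com"]:
        try:
            answer = resolver.resolve(name, "A")
            print(name, "->", [rr.address for rr in answer])
        except dns.exception.DNSException as exc:
            print(name, "failed:", exc)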

If you are behind CGNAT, tailnets aren't particularly less centralized, as traffic has to flow through Tailscale's DERP relay servers. I doubt Tailscale can keep providing these free of charge once the volume is measured in Tbps instead of Gbps.
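You can check how much of your own tailnet is relayed, by the way. A rough sketch against `tailscale status --json`; the per-peer CurAddr/Relay fields match what current releases emit, but treat the exact field names as an assumption:

    # Sketch: list tailnet peers whose traffic is going through a DERP
    # relay instead of a direct connection. Assumes the tailscale CLI is
    # installed and that its JSON status reports, per peer, "CurAddr"
    # (empty when relayed) and "Relay" (the DERP region).
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["tailscale", "status", "--json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for peer in status.get("Peer", {}).values():
        if peer.get("Active") and not peer.get("CurAddr"):
            print(f"{peer.get('HostName')}: relayed via DERP {peer.get('Relay')}")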

I agree that Tailscale (and similar solutions) help in the last remaining case, which is accessing your computer that is behind a NAT. I can even see them reaching tens of millions of users. That is, in my opinion, not enough to claim the title of "the new internet".




This rests on the incorrect assumption, which the post calls out, that most applications need the kind of scale that warrants quickly adding more servers.


The article tried to make k8s all about scale, when that's not the whole story.

On other socials, a screenshot of the 'Not scaling' section is getting responses of "Those idiot developers think they need k8s scaling for their 1 req/s sites, ha ha."

The author brags about being able to (skip testing and CI/CD pipelines and just) edit their Perl scripts (in prod) really quickly.

What uptime is associated with that practice? As many 9's as it takes for Brad to debug his Perl program in prod? This approach doesn't even scale to two developers unless they're sitting in the same room.
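For reference, the downtime budget each extra nine buys (plain arithmetic, not numbers from the article):

    # Yearly downtime budget for N nines of availability.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for nines in range(1, 6):
        availability = 1 - 10 ** -nines
        budget = MINUTES_PER_YEAR * 10 ** -nines
        print(f"{availability:.3%} uptime -> {budget:8.1f} minutes/year of downtime")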

DevOps isn't a machine where you put unnecessary complexity in one end and get req/s out the other. It's about managing risk and deployments better.

If I really wanted to engineer for req/s, I'd look at moving off k8s and onto bare metal.


In an enterprise environment, I'd like a networking solution that lets me run an app on my own office workstation and expose it as a service to some part of the company, at an SLO that can reasonably be guaranteed by a workstation: 99.9%. That would cut so much of the time spent "productionizing" stuff that doesn't need CI/CD pipelines or a datacenter deployment: just me editing a Python file and restarting it.
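Something like this already half-exists with Tailscale's serve command. A minimal sketch; the port, the content, and the exact `tailscale serve` invocation are assumptions on my part (flag shape varies by Tailscale version):

    # The whole "deployment": a plain Python HTTP server on the
    # workstation, edited and restarted by hand. Publishing it to the
    # tailnet would then be roughly one command:
    #   tailscale serve --bg 8080
    # (treat that flag shape as an assumption; check your version's docs).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Edit this, restart the process, done. No pipeline.
            body = b"quarterly report endpoint goes here\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()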



