The New Internet (tailscale.com)
517 points by ingve 43 days ago | 305 comments



The eternal problem with companies like Tailscale (and Cloudflare, Google, etc. etc.) is that, by solving a problem with the modern internet which the internet should have been designed to solve by itself, like simple end-to-end secure connectivity, Tailscale becomes incentivized to keep the problem. What the internet would need is something like IPv6 with automatic encryption via IPsec, with PKI provided by DNSSEC. But Tailscale has every incentive to prevent such things from being widely and compatibly implemented, because it would destroy their business. Their whole business depends on the problem persisting.

(Repost of <https://news.ycombinator.com/item?id=38570370>)


This sounds like a reasonable point, but the more I think about it, the more it sounds like digital flagellation.

IPv6 was released in 1998. Twenty-one (!) years later, when Tailscale launched in 2019, what you're describing still had not been implemented. Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

It's easy to paint companies as bad actors, especially since they often are, but Google, Cloudflare and Tailscale all became what they are for a reason: they solved a real problem, so people gave them money, or whatever is money-equivalent, like personal data.

If your argument is inverted, it becomes a kind of inverse accelerationism (decelerationism?) whereby only by making the Internet worse for everyone can the really good solutions see the light. I don't buy it.

Tailscale is not the reason we're not seeing what you're describing; the immense work involved in creating it is, and it's only when that immense amount of work becomes slightly less immense that any solution at all emerges. Tailscale, for example, would probably not exist if they had to invent Wireguard, and the fact that Tailscale now exists has led to Headscale existing, creating yet another springboard in a line of springboards to create "something" like what you describe -- for those willing to put in the time.


> Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

The folks who either (a) got in early on the IPv4 address land rush (especially the Western developed countries), or (b) have buckets of money to buy addresses.

If you're India, there probably weren't enough IPv4 addresses in the first place to handle your population, so you're doing IPv6:

* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

Or even if you're in the West, if you're poor (a community Native American ISP):

> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.

* https://community.roku.com/t5/Features-settings-updates/It-s...

* Discussion: https://news.ycombinator.com/item?id=35047624

IPv4 'wasn't a problem' because the megacorps who generally run things where I'm guessing you're from (the West) were able to solve it by other means… until they can't. T-Mobile US has 120M subscribers, and a few years ago it turned out that money couldn't solve IPv4-only anymore, so they went to IPv6:

* https://www.youtube.com/watch?v=QGbxCKAqNUE

IPv6 is not taking off, not because IPv4 (and NAT/STUN/TURN) is 'better', but rather because of (a) inertia, and (b) the fact that IPv4 'works' (with enough kludges thrown at it).


There is another reason: the addresses are long and impossible to remember and hard to type.

I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.

Even “small” usability differences can have a huge effect on adoption.


Also, NAT is desirable for security/network isolation reasons, and having no distinction between a local IP and a public IP has a lot of disadvantages.

Yes, there are ways to configure IPv6 to isolate subnets, separate local traffic from internet traffic, set up firewalls and DMZs, run local DNS, etc., but they're all more complicated to configure and administer than their IPv4 equivalents.


> NAT is desirable for security/network isolation reasons

For the love of expletive this mistaken belief needs to have died yesterday. NAT boxes help primarily because they also contain a firewall. But most of 2024's network security problems originate from the devices behind your firewall getting exploited through their own requests, not some random shit connecting from the outside. (Yes, that does still happen, so you keep your firewall.)

> no distinction between a local IP and a public IP

https://en.wikipedia.org/wiki/Unique_local_address


>But most of 2024's network security problems originate from the devices behind your firewall getting exploited through their own requests, not some random shit connecting from the outside.

That is survivorship bias at its best.

They originate _inside_ because NAT effectively blocks all _external_ requests.


> They originate _inside_ because NAT effectively blocks all _external_ requests.

You mean the firewall effectively blocks all external requests.


You are technically correct that iptables is what provides the NAT functionality on Linux (by way of the MASQUERADE target), which many routers run. However, you are very incorrect that the firewall is directly involved in blocking the request.

The reason NAT works for this is because by default there are no Internet-accessible services available via the router. If a request is received by the router that doesn't match an open port, its OS will, by default, reject it, with no firewall required.


So then what’s this default firewall rule I have that blocks all non-established connections?

NAT is not required for any of the things you’re talking about.


okay now I'm curious

what happens with an incoming packet if there are no firewall rules on the NAT gateway/middlebox? without having a corresponding conntrack entry they will be dropped (and maybe even an ICMP message sent back, depending on the protocol), no?

for example if there is an incoming TCP packet with a 4-tuple (src ip, src port, dst ip, dst port) ... by necessity "dst ip" is the public IP of the NAT box, and on a pure NAT box there are no bound listening sockets. so whatever "dst port" is .. unless it gets picked up by an established NAT flow ... it will splash on the wall and get a TCP RST.

isn't the argument that "NAT is not required", but that "NAT is implicitly a firewall"?


> what happens with an incoming packet if there are no firewall rules on the NAT gateway/middlebox?

See perhaps stateless NAT:

* https://wiki.nftables.org/wiki-nftables/index.php/Performing...


On a directly connected outside system, you can set a route for your LAN address space via that router and it will just work. It requires telco or physical access but I have in fact done this before.


>what happens with an incoming packet if there are no firewall rules on the NAT gateway/middlebox?

You get a Full Cone NAT. Once the middlebox maps an (internal IP, port) tuple to an external port, every connection to that external port would lead to that internal tuple.

Why should Host C be able to reach Host A, when Host A is only speaking to Host B?

I am sure you know this but still, I have to stress that NAT is merely a mapper from one tuple to another tuple. If your router can handle NAT it certainly can handle an IPv6 firewall. And modern home/SOHO routers come with an IPv6 firewall enabled by default (for the non-home routers, you have a bigger issue if your networking guys are not checking whether the firewall is active), so I find the firewall discussions as utterly meaningless as someone fearing their DHCP server is not turned on by default. And frankly speaking, it's just an excuse for not implementing IPv6 -- saying that your ISP doesn't provide IPv6 connectivity would have been more convincing.


> If your router can handle NAT it certainly can handle an IPv6 firewall.

The point is not that "it can", the point is that on ipv4 "it doesn't work without".

In order for ipv4 to work at all you MUST use NAT, and implicitly a firewall; those two always work together even if the person installing the system doesn't know the word "firewall", which is usually the case.


> The point is not that "it can", the point is that on ipv4 "it doesn't work without".

Hmm. I hadn't seen this brought up and I think it's a stronger argument than most others.

The IPv6 equivalent is services on ULA only, but that's not a default behaviour.


Thanks, I wasn't familiar with this term!

I think you misunderstand my post. My "philosophical inquiry" is about trying to get to the bottom of this, and it seems to me that NAT, as virtually everywhere deployed and found in the unspeakably many SoHo setups, is a stateful NAT, and it's implicitly a bad firewall.

So when people say that this "meme" should die ... well, they are right, but not technically right, no?


Regardless, it's a fair point. Most of the attack surface on client / end user boxes these days is through social engineering and end user stupidity. Vanishingly little of it on modern OSes comes from external sources like a scan revealing a mistakenly open port. It's just that the threat profile has shifted toward making users make mistakes, to the point where so much resource is now thrown at fooling users that, by the numbers and the ransomware profits, it's more effective than trying to penetrate software remotely.


That is because most systems come with a firewall on and a fairly limited surface area in the form of exposed services.

But there are billions of other devices (IoT etc.) that barely have any security protections in place and rely completely on not being exposed to the outside world.


> But there are billions of other devices (IoT etc.) that barely have any security protections in place and rely completely on not being exposed to the outside world.

Yes. And you can not-expose them via default deny firewall rule.

My home printer had an IPv6 address in a prefix assigned by my ISP, but it was not accessible from the Internet (it was actually ping6-able because my Asus allowed ICMPv6 by default, but I could not connect to its web interface, like I can internally). Neither could I SSH into my macOS desktop or laptop from the outside (but could between the two internally).

And even if my globally addressable devices were globally reachable (which they were not), good luck scanning a /64.


I know. But this old NAT vs. firewall crap was pointless decades ago.

Still is.


Amen! In the 2000s I had a newly set up Windows XP box with the modem on the internet. After 30 minutes it was toast.

Riddled with viruses.


I had the same, except I was not running it as admin, so nothing could touch system32 or the boot sector.


> Also, NAT is desirable for security/network isolation reasons […]

This is security theatre. People have been saying that NAT is not a security feature for over a decade:

* https://blog.ipspace.net/2011/12/is-nat-security-feature/

but the message still has not sunk in. The "Zero Trust" paper was published by John Kindervag in 2010:

* https://media.paloaltonetworks.com/documents/Forrester-No-Mo...

Most modern attacks start from a compromised internal host (e.g., from phishing), or through stolen credentials via a remote access method. The above is "castle-and-moat" thinking that tends to have weaker internal controls because it is thought the internal network is "hidden" from the dangerous outside network.

Set your firewall to default deny, then add a rule to allow outgoing connections, followed by one that allows incoming connections only if they are replies. For most machines (and networks), most of the time, this is what's needed: the above is applicable for both IPv6 and IPv4 (with or without NAT).

The protection comes from filtering (generally) and stateful packet inspection, not from hiding addresses.
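
To make that concrete, here is a toy sketch (a hypothetical pseudo-firewall, not a real implementation) of the "default deny, allow outgoing, allow replies" policy described above; note that there is no address rewriting anywhere:

  # Toy model of "default deny, allow outgoing, allow replies"; no NAT involved.
  # (Illustration only; a real stateful firewall tracks timeouts, TCP flags, etc.)
  established = set()  # 4-tuples of connections initiated from the inside

  def handle_outbound(src_ip, src_port, dst_ip, dst_port):
      # Allow all outgoing traffic and remember what a valid reply would look like.
      established.add((dst_ip, dst_port, src_ip, src_port))
      return "ACCEPT"

  def handle_inbound(src_ip, src_port, dst_ip, dst_port):
      # Accept only packets that are replies to connections we opened ourselves.
      if (src_ip, src_port, dst_ip, dst_port) in established:
          return "ACCEPT"
      return "DROP"  # default deny: unsolicited inbound never reaches a host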

> […] and having no distinction between a local IP and a public IP has a lot of disadvantages.

Just because something has a global address does not mean global reachability (see default deny above). Further, you can lay out your IPv6 address plan so that you can tell at a glance if hosts are externally accessible. Using a /48 as a basis, you break out sixteen /52s, numbered $PREFIX:[0-f]000::/52.

To make it easier to remember what is externally accessible, you put all of those hosts in $PREFIX:e000::/52, where e stands for external. That /52 can then be broken down into:

* sixteen /56s

* 256 /60s

* 4096 /64s

or any combination thereof. See Figure A-5 for various ways to slice and dice:

* https://www.oreilly.com/library/view/ipv6-address-planning/9...

Everything in $PREFIX:[0-d,f]000::/52 is not externally reachable.
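
If it helps, the same slicing can be sketched with Python's ipaddress module (using a documentation prefix as a stand-in for $PREFIX):

  import ipaddress

  # Stand-in /48 from the 2001:db8::/32 documentation range; substitute your own $PREFIX.
  site = ipaddress.ip_network("2001:db8:1234::/48")

  # Sixteen /52s: $PREFIX:0000::/52 through $PREFIX:f000::/52
  blocks = list(site.subnets(new_prefix=52))
  external = blocks[0xE]                 # "e" for external
  print(external)                        # 2001:db8:1234:e000::/52

  # That /52 breaks down into sixteen /56s, 256 /60s, or 4096 /64s.
  print(sum(1 for _ in external.subnets(new_prefix=56)))   # 16
  print(sum(1 for _ in external.subnets(new_prefix=60)))   # 256
  print(sum(1 for _ in external.subnets(new_prefix=64)))   # 4096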


> https://blog.ipspace.net/2011/12/is-nat-security-feature/

>> Basic NAT (as defined in RFC 2663) performs just the IP address translation (one inside host to one IP address in the NAT pool). The moment the inside host starts a session through the NAT, it becomes fully exposed to the outside world.

This is a lie. A "session through the NAT" does not really expose the host to the outside world, because in 99% of the cases this is a TCP session, and the NAT machine would drop all "out of order" packets.

>Most modern attacks start from a compromised internal host (e.g., from phishing), or through stolen credentials via a remote access method.

Your statement is a perfect example of https://en.wikipedia.org/wiki/Survivorship_bias.

Most modern attacks start from an internal host exactly because NAT makes external attacks infeasible for the majority of scenarios.

>Set your firewall to default deny, then add a rule to allow outgoing connections, followed by one that allows incoming connections only if they are replies.

What if I don't do it, and the system is still _automatically_ secure, because NAT does exactly that while being _required_ for the system to work?

>See Figure A-5

LOL. What if I don't look at any figures, and the system still works and is secure for 99% of the cases?


> This is a lie. A "session through the NAT" does not really expose the host to the outside world, because in 99% of the cases this is a TCP session, and the NAT machine would drop all "out of order" packets.

No, it's not. NAT only translates addresses and does not inspect the TCP "internals" (like sequence number etc, which would allow it to block certain packets).

What you are describing is a stateful firewall that allows "reply packets" for an established TCP-session.


>No, it's not. NAT only translates addresses and does not inspect the TCP "internals" (like sequence number etc, which would allow it to block certain packets).

Yes it is. How would it forward response packets back if it doesn't track connections?

In real life I haven't seen "stateless NAT" for about 20 years.

But CGNAT machines usually go beyond that and even verify sequence numbers.


> Most modern attacks start from an internal host exactly because NAT makes external attacks infeasible for the majority of scenarios.

Or, you know, because firewalls block stuff.

I've had hosts with public IPv4 addresses attacked on (e.g.) tcp/80 and tcp/443 because that's what the firewall allowed through so the web service was available to the public. I've had hosts with internal IPv4 addresses attacked on web ports because they were behind a (reverse proxy) load balancer for serving traffic: the fact that they had a 10/8 and were behind a NAT did not protect them from attack.

Before recently switching ISPs, my last one had IPv6 (new one does not). They activated IPv6 at some point, and I enabled it on my Asus, and suddenly all my internal devices got an IPv6 address (via RA), including things like my printer.

I had SSH enabled on my macOS laptop and desktop, but could not SSH into them from an outside source. My printer has a web interface on port 80 that I could connect to internally, but not externally. Even though all the devices had IPv6 addresses.

Just because a device is globally addressable does not mean it is globally reachable.

> What if I don't do it, and the system is still _automatically_ secure, because NAT does exactly that while being _required_ for the system to work?

Because NAT is doing what I describe, you are doing it. The firewall is checking state on incoming packets and rejecting those that are not in its state table. The firewall also just happens, coïncidentally, to be fiddling some bits in the address field.

It is the stateful inspection that is protecting you.

* https://en.wikipedia.org/wiki/Stateful_firewall


You have a mix of accurate and not-so-accurate here.

> I've had hosts with internal IPv4 addresses attacked on web ports because they were behind a (reverse proxy) load balancer for serving traffic: the fact that they had a 10/8 and were behind a NAT did not protect them from attack.

You explicitly set up a NAT bypass (reverse proxy) and then claim NAT didn't protect them. If I am an external attacker coming in towards a single public IP where the backside hasn't set up UPnP/Port Forwarding/STUN/Reverse Proxy, NAT does exactly what the previous poster said. It drops packets because the 'destination' in the packet is the router itself. The packet has nowhere else to go; it has literally reached its destination.

A stateful firewall is in no way necessary for this functionality to exist. Even stateless UDP packets cannot bypass the NAT: if there is no table entry tracking a conversation initiated from the inside out, the router has zero idea which interior host to forward the packet to and no reason to do so.


Modern ransomware attacks have demonstrated beyond doubt how harmful the belief is that a private network behind NAT is an acceptable alternative to keeping systems secured and patched. The only thing NAT does in a security sense is provide a false sense of security until the day a single machine inside gets compromised, and then the whole hospital or company comes to a standstill while rushing to restore from backup, praying that they do keep backups. It's a mistake, and the only reason it felt like a good idea was because Microsoft, with Windows 98/2k/XP, got hit en masse with worms targeting vulnerable Windows machines that never got updates.


> Yes, there are ways to configure IPv6 to isolate subnets, separate local traffic from internet traffic, set up firewalls and DMZs, run local DNS, etc., but they're all more complicated to configure and administer than their IPv4 equivalents.

Eh, I think that's hindsight bias. Setting up NAT manually, or customizing how things are NATed beyond the typical "one or two subnets/IP ranges behind a NAT gateway and maybe a DMZ" you see in businesses and residences, is quite complicated! It's just that our control planes are really optimized to make that common case very easy. From router web UIs to pf presets to Windows'/NetworkManager's "share network" functionality to what articles/how-tos are available, that complexity is very effectively hidden but not removed.

As IPv6 becomes more entrenched (and more sites move to IPv6-only or public-IPv6-only deployments), the same thing that happened for the IPv4 world will happen for network segmentation configuration in the IPv6 world: it will get a lot easier and common defaults/conventions will emerge. I don't think the inherent complexity differences between IPv4 and IPv6 are that relevant here.


In practice I haven’t ever had a problem memorising IPv6 addresses. The significant portions of any address you might type manually are 48 bits long at one end and a few bits at the other.

An example IPv4 address is 8 to 12 digits:

  10.30.115.5

A memorable IPv6 address at a /56 site — the prefix and then one or two digits — isn’t much longer:

  2001:db8:404:14::42

If you’re with a reasonably clued in ISP you probably get a /48 for your site by default:

  2001:db8:404::42

If you’re enumerating your own /64 prefixes then it’s not much more complicated than:

  site 2001:db8:404::
  net              :14::
  host                ::42
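
A quick sketch with Python's ipaddress module (same documentation prefix as above) of why the short form stays short: the zero groups are elided by the double colon.

  import ipaddress

  host = ipaddress.ip_address("2001:db8:404:14::42")
  print(host.compressed)   # 2001:db8:404:14::42
  print(host.exploded)     # 2001:0db8:0404:0014:0000:0000:0000:0042

  # The /64 the host sits in, and the /56 site prefix:
  print(ipaddress.ip_interface("2001:db8:404:14::42/64").network)  # 2001:db8:404:14::/64
  print(ipaddress.ip_network("2001:db8:404::/56"))                 # 2001:db8:404::/56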


This is the first time I've read about someone actually trying to remember IPv6 addresses; maybe I should try that, because it's really easy to remember IPv4. For me, the problem is that there are hex numbers, which are harder to remember, and missing zeros, so you need to remember the colons. If IPv6 were just 6 decimal numbers and this were the default way of writing them, this would not be a problem. But it feels to me that the cryptic way IPv6 is written is to make it hard for humans to remember.


>the prefix and then one or two digits — isn’t much longer

I'd argue it's just enough to make the difference though.

The problem is that people got used to being able to rely on memorizing IP addresses. IPv6 does its best at making IP addresses both harder to memorize, and completely dynamic, going so far as to change the IP on a fairly regular basis. It's antithetical to some very core qualities that an IP address is supposed to have in the minds of many.


That’s because nobody normal— anyone who isn’t a tech person— remembers IP addresses.

Hell I can’t get tech people I work with to give me their public IP.


Maybe young people can’t, but older folks can easily store phone numbers in their brains, so IP4 is easy.


> There is another reason: the addresses are long and impossible to remember and hard to type.

If only there was some mechanism in which we could use a human-friendly label and have that translated to a computer-usable address…

> I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.

I don't bother remembering IPv4 addresses, so I'm not sure why I would bother to remember IPv6 addresses. Heck, phone numbers are generally short as well, and who remembers them nowadays? ("0118 999 881 999 119 725… 3")

Maybe it's dismissed because people see it as a non-issue. I regularly work at OSI Layer 2 (and even 1, pulling fibres in a DC), and Layer 3, and am not sure what the concerns are about.


The problem is that DNS is not zero configuration. ARP and NDP are, which is why nobody complains about Ethernet addresses being hard to type. DNS has to be “stood up”, which is a whole extra deployment.

In modern devops in particular it is common to create and tear down IP networks in seconds and sling stuff everywhere. The extra moving part is an extra thing to break.

DNS also runs over IP which means if IP is down DNS doesn’t work. What do you have to do then? You have to debug IP without DNS.

There is mDNS but it’s not reliable and doesn’t scale to large networks. It also runs on the IP layer so if there is a problem there it can break.


> The problem is that DNS is not zero configuration.

Certainly it is not-zeroconf, but it is the same not-zeroconf for both IPv4 and IPv6.

But extra work with DHCP is needed for IPv4, and extra-extra work if you need to do things like configure 'IP helper', whereas IPv6 can be configured using only a router (which you need regardless) and some on-link packets (RAs).

> DNS also runs over IP which means if IP is down DNS doesn’t work. What do you have to do then? You have to debug IP without DNS.

And? At least you have fe80::/64 as a basic starting point. Run a tcpdump to see if you're on-link in any way (or in the correct VLAN), and if you are, you can then ping(6) ff02::2 to find out if there are any on-link routers. You've now debugged Layer 2 and Layer 3 connectivity. Tada.

You're making IPv6 (sound) way more complicated than it is. It is no more or less complicated than IPv4 or IPX/SPX or …. It's protocol data units at OSI Layer 2 or 3 in different formats with different fields.


> who remembers them nowadays? 0118 999 881 999 119 725… 3

Me, that’s the new number for the emergency services.

Actually, that’s the only phone number I can remember :D


  Eight six seven five three oh nine
  Eight six seven five three oh nine


Thanks for pointing this out. It's hard to communicate ipv4 and I dread even reading ipv6.

I don't understand why they didn't just add two or four more fields to ipv4 e.g. 0.91.127.0.0.1 is localhost where 0.91 can be omitted in the local context.

PS: I don't understand how networking works. Feels very very complex and full of jargon.


Because the fields are there for humans; in the packet itself it’s a 32-bit integer, and you can’t just arbitrarily make the src/dest fields in the packet bigger— it stops being IPv4 then.
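
A quick sketch of that point: the dotted quad is only a human-readable rendering of a single 32-bit field in the header.

  import ipaddress

  addr = ipaddress.IPv4Address("10.30.115.5")
  print(int(addr))       # 169767685: the value that actually travels in the packet
  print(addr.packed)     # b'\n\x1es\x05': the same four bytes, raw
  print(ipaddress.IPv4Address(169767685))  # 10.30.115.5 again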


I'm pretty sure the person you're replying to is saying that IPv6 should be IPv4 but longer, which is not at all an uncommon opinion, even if it's a breaking change to the IP protocol. And I'd argue there would have been incredibly strong benefits and much wider adoption if they had done this. Sure, you'd still need new networking gear and software support to handle it, but the change is relatively simple (and potentially more easily backwards compatible), especially compared to all of the baggage that came with IPv6.

It's a fact of life when working with networking that we'll have to work with IP addresses at some level. It's easy to tell someone, "hey try typing in 'ping 8.8.8.8' and tell me what you get".

The readability of IPv6 is, in my opinion, worse with repeated symbols and more characters to remember. The symbols that were chosen were also poorly thought out. Colons are used in networking a lot of times when you want to connect to a service on a particular port, so if you want to visit 2001:4860:4860::8888 in your browser, you have to enclose the address in square brackets.
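
A small illustration of the colon clash; without the brackets a URL parser could not tell where the address ends and the port begins:

  from urllib.parse import urlsplit

  u = urlsplit("http://[2001:4860:4860::8888]:8080/path")
  print(u.hostname)   # 2001:4860:4860::8888
  print(u.port)       # 8080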


> The symbols that were chosen were also poorly thought out.

The wackiest example I've seen of this is the `ipv6-literal.net` notation for Windows UNC paths: https://devblogs.microsoft.com/oldnewthing/20100915-00/?p=12...


> I don't understand why they didn't just add two or four more fields to ipv4 e.g. 0.91.127.0.0.1 is localhost where 0.91 can be omitted in the local context.

Because they thought that 64-bits would not be enough, and did not want to have to go through yet another transition.

The IPng proposal that was chosen, SIPP, was originally 'only' 64-bits:

* https://datatracker.ietf.org/doc/html/rfc1752#section-9

See also §10.2:

* https://datatracker.ietf.org/doc/html/rfc1752#section-10.2

Specifically (§11.1):

    * scale - an address size of 128 bits easily meets the need to
      address 10**9 networks even in the face of the inherent
      inefficiency of address allocation for efficient routing


if ipv4 is called that because there are 4 numbers in the IP address, what would you call your scheme with 6 of them?


Yes, like Ethernet addresses. Those are impossible to remember, too, so obviously Ethernet is no good. /s

The solution for IPv6 addresses is the same for Ethernet addresses; don’t use them directly. Leave it to the name resolution system, and use host names.
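
For example, a minimal sketch of leaving it to the resolver (the hostname is just an illustration, and the lookup may fail on a host with no IPv6 path):

  import socket

  # Ask the resolver for AAAA records instead of memorising the address.
  infos = socket.getaddrinfo("example.com", 443, socket.AF_INET6, socket.SOCK_STREAM)
  for _family, _type, _proto, _canon, sockaddr in infos:
      print(sockaddr[0])   # whatever AAAA the zone currently publishes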


> IPv6 was released in 1998. ... Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?

Well, for a long time, IPv6 didn't work very well. We're past that, mostly. Google reports that 45% of their incoming connections worldwide are IPv6.[1] Growth rate has been close to linear, at 4%/year, since 2015. IPv6 should pass 50% some time in 2025.

Mobile is already 70%-90% IPv6. They need a lot of addresses.

Most of the delay comes from enterprise networks. They have limited connectivity to the outside world, and much of that limiting involves some kind of address translation. So a "corporate IPv6 strategy" is required.

[1] https://www.google.com/intl/en/ipv6/statistics.html


It is a problem when, for instance, Google chooses not to implement SRV (and later HTTPS) DNS record support in their web browser. The problems which SRV (and now HTTPS) DNS records solves is not a problem for Google, since they solved the problem by sheer scale and brute force, and Google only benefits from everybody else still having the problem; it’s a great moat for them.


And, worse, incentivized to require users to use a "coordination server" which helps with the NAT and firewall traversal problem by being something you can reach from outbound-only clients. There's a lot of verbiage there, but the general idea seems to be that Tailscale sits at the middle of this as the means by which machines find each other.

There are other ways to do that.

There are dynamic DNS schemes, so you can give your machine which only has a temporary IP address a permanent name. That's been around for decades, and seems to have a bad reputation.

There are schemes with multiple coordination nodes that know about each other, and published lists of such nodes. The list may be out of date, but as long as the published list has one live node, you can connect and get updated. That's how Kademlia, which underlies Ethereum's network and some file sharing systems, works. That's about 20 years old, and sort of has a sketchy reputation.

It's possible to go only halfway, and separate discovery from transmission. Peertube does that. You find a file to stream via ordinary HTTP to a server you find by ordinary web search means. Anybody can set up such a server. The actual streaming, for files wanted by many clients, is distributed, with people currently watching also sending out blocks to other people watching. This scales well, in case your video goes viral. It's not used much, though.

So it's definitely possible to do this without someone in the middle able to cut off your air supply.


How is trusting a dynamic DNS provider different than trusting Tailscale's coordination nodes?


Not everybody has to use the same dynamic DNS provider.


  Competition
  Jurisdiction
  Resilience
  Biodiversity


No, we definitely don't want "automatic IPSec" (especially IPSec!), or really any enforced encryption at the network level, even if it's something sane at this moment like WireGuard. Look at old VPN protocols or authentication schemes like RADIUS which have glaring security holes and are impossible to fix because of compatibility issues, and they're running at much smaller scales than the whole internet. Hell, the way the industry is solving TCP ossification problems is by throwing TCP away and reimplementing it on top of UDP, that should tell us something.


Your argument seems to be to never implement anything, because eventually it will become old and it will be hard to move away from it? This seems to be an argument against anything new, and it is therefore hard to take seriously.


It’s an argument against complexity. IP had amazing longevity because of its simplicity and openness.

Even if something is open, complexity is almost as bad as being closed, as we can see with crazy complicated web standards for which there are few implementations.


Not GP, but I interpreted it as an argument to innovate/proliferate implementations early and often, but to standardize minimally and as late as possible.


The argument is more that encryption technologies can become outdated quickly. You also make it harder for small embedded devices to implement network connections if you mandate that all traffic must be encrypted.

A simple protocol is more likely to last.


IPSec is still widely deployed, securely even. All manual, of course, because it doesn't do the stuff that makes HTTPS easy.

IPSec is not always a VPN protocol. L2TP over IPSec is often used as one, but IPSec does little more than encrypt a tunnel between two IP addresses. IPSec in tunnel mode can be a minimal VPN, but it's not used as such in VPN scenarios without a second packet encapsulation protocol, as it lacks authorization beyond key exchange.

As for the risk of ossification: that didn't go away with the current system either. HTTPS over TLS 1.3 looks like a TLS 1.2 session resumption on the wire (in its default configuration) because shitty middleboxes are used often enough that it would impede the protocol.

The "let's remake TCP over UDP" approach QUIC takes has very similar origins. UDP is generally allowed by random firewalls over that network, while other (more suited) protocols for this type of stuff like SCTP are not. The operating system doesn't allow opening raw network sockets without high privileges, so adding a new QUIC protocol at the layer of TCP and UDP to implement them at the right spot in the stack wouldn't be usable for many devices. Same is true for the TCP stack you have to use what the OS provides or get higher privileges to do your own; patching the TCP state machine isn't practical. So, if you want to implement a better version of TCP optimised for web browsing and such, you use UDP, because while technically incorrect, it'll work in most cases and has the least restrictions.

In the context of the network, IPSec is the new protocol here, not the result of ossification.


ZeroTier does kind of do that. It's a tunnel, but the traffic is direct (unless double NAT is involved), and if you could route the traffic directly to the endpoint IPs, you could skip ZT. The location service can be self-hosted if you want. You don't have to use them as a service if you don't want to. Apart from DNSSEC it's pretty much what you're asking for.


Double NAT is now almost everywhere in the world, except maybe USA.


What kind of Nat though? You can use upnp, predictable mapping, etc. and still allow the traffic through. And that's only with ipv4, because you can run zerotier over IPv6.


> What kind of Nat though? You can use upnp, predictable mapping, etc. and still allow the traffic through.

Your computer can talk to your home router (CPE) and punch a hole for a connection, but if your WAN port does not have a public IP address, but rather itself also has a private address (probably 100.64/10), the CPE cannot talk to the ISP's router to punch a hole:

* https://en.wikipedia.org/wiki/Carrier-grade_NAT

The two layers of NAT (home network (192.168) -> CPE NAT (100.64/10) -> ISP NAT ('real' public IPv4)) prevent hole punching.


Double NAT on one side is not that universal. Across Europe and Australia I've seen it maybe once on a residential connection. I'm sure it's used, but the comment about the US in the post above just doesn't match my experience.


Great for you for not having to experience it, but that doesn't mean it sucks any less for those less fortunate:

> Our [Native American] tribal network started out IPv6, but soon learned we had to somehow support IPv4 only traffic. It took almost 11 months in order to get a small amount of IPv4 addresses allocated for this use. In fact there were only enough addresses to cover maybe 1% of population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.

> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.

* https://community.roku.com/t5/Features-settings-updates/It-s...

* Discussion: https://news.ycombinator.com/item?id=35047624


You can't do that over double NAT because the second layer of NAT is not going to support UPnP.


The vast majority of CGNATs across the world don't support the PCP protocol for predictable mapping.


foreseeable yet still somewhat surprising that having a clean v4 address on the cpe has become a very privileged position.

just the other day i was discouraging a youngster from manually populating his hosts-file in order to circumvent a dmca-related dns block.... what has the world come to.


I think that is an excessively negative take. Tailscale's value proposition is also "you can connect to your network wherever you are, safely, and others cannot". That does not go away because of IPsec.


Network- and location-based security is ultimately unworkable. It’s like if you, in order to work, had to go to a ”virtual office” to even send mail to your colleagues. Mail, and related internet-enabled services, should be accessible from anywhere, and be secured at the end points, not at the network layer. (Most attacks are internal, anyway.)


Most people do need to be on a VPN or in an office to work. That's entirely normal, and makes sense even if you also require authentication for applications.


Why should you have access to the SSH host for my Pi?

Or, more to the point, the server that I use to run my RSS feed reader?

Or my NAS?

Tailscale makes these more secure and more accessible for me. They are never meant to have the world access them.

Now for email and a few other things, sure, their nature is that they need to access the world.


> Why should you have access to the SSH host for my Pi?

Because that is how the internet is meant to work. It is an end-to-end network. If SSH would not be secure enough to handle this, it would need a secure replacement.

> Or my NAS? […] They are never meant to have the world access them.

What is a NAS, if not a Network-Attached Storage, i.e. meant to be accessed from the network? The concept of a ”local”, ”secure” network is a dangerous illusion. Embrace ”zero trust” networking.


> Because that is how the internet is meant to work.

No. The "internet" is literally the "inter-network", a way to connect private networks between each other.

The fact that VPN technologies sit behind proprietary corporate intellectual property is not by design, it is a failure of the internet standardization process as it was gamed by corporate interests.


No, network- and location-based security is a necessary and indispensable layer in your security stack.

> should be accessible from anywhere, and be secured at the end points, not at the network layer

If you're not securing at both the network layer and the endpoints, then you have utterly failed security and you need to go sit in the dunce corner.


secured at the endpoints yes... I would argue you can go one step further, doing it at the application level. This is what we built (and open sourced) with OpenZiti (https://openziti.io/), the ability to embed an overlay network, built on zero trust and deny by default principles, directly into the app as part of the SDLC.

If you do this, your application has no listening ports on the WAN, LAN, or host OS network and thus cannot be attacked from the external network/IP.

The asymmetry of risk now favours the defender, not attacker. Oh, plus we also have pre-built tunnelers for endpoints if you cannot do app embedded.


Yggdrasil is p2p ipv6 e2ee. https://yggdrasil-network.github.io/

Last I checked, it hasn't solved DNS yet (there are unofficial projects trying to do that). I tested a small private network with a few devices and it worked very well.


The problem with TCP/IP is the lack of a standard and robust VPN/overlay network protocol. Everything we have is extremely fragmented and/or proprietary.

IPv6 is completely useless and doesn't solve this problem.

Normal people don't care if they have to pay 5 dollars instead of 50 cents to rent an IP address. This is a problem specific only to the huge providers, and we don't need to rollout a whole internet upgrade just to optimize a tiny part of the operational costs for huge providers.


We are trying to change that with OpenZiti - https://openziti.io/. It's an open source network overlay built with zero trust principles and deny by default in mind. We also built it for developers, so it includes SDKs and other means to embed overlay networking directly into the SDLC.


It's a complex problem that hasn't even been formulated properly yet.

For example, every existing solution touts "security" and yet completely mangles the difference between authentication and encryption.

Authentication is important - you don't want random servers or users to enroll on your network, and you want good tools to rotate and manage secrets.

Encryption isn't important unless you care about state-level actors sniffing your traffic at the backbone. (And if you care about that then you already have your own datacenter.)

Meanwhile encrypting all network traffic is a huge performance penalty. (Orders of magnitude for some valid use cases.)


So far as I’m aware, TailScale has been at all times a good actor.

I have no problem criticizing tech companies, but I try to wait until they behave badly.


> TailScale has been at all times a good actor.

This is the Cloudflare problem all over again. One day Matthew Prince will get hit by a bus, all the "trustworthy people" will leave, a PE firm will take the company private and merge it with an ad network. Congrats, the entire internet now has a single company's ads all over it, and we let it happen because we happened to like the people fucking us.


Matthew Prince is definitely not a good actor, but that's not the point. What Cloudflare did was they acted like good people, said good things, even did some good things, but once they got enough business and momentum, they then started doing shadier and shadier things, and now they're a protection racket that is happy to protect scammers for a fee. I think Cloudflare's most ardent fans would have trouble articulating technically valid reasons for why it makes sense to re-centralize the Internet around them, yet that's exactly what Cloudflare want.

That's why people don't necessarily, and shouldn't, trust that Tailscale won't head down the same path. It's hard enough for non-profits - heck, the Mozilla Foundation is losing all the good will they've ever had, and even the Raspberry Pi Foundation decided to gaslight people when they started eyeing corporate money.

If there's an open source way to do a thing that's a pain in the ass and a way to do the same thing from a for-profit company, I'll take the pain in the ass thing every time. History has shown it to be the prudent thing time and time again.


Check out OpenZiti then - https://openziti.io/. It's Tailscale on steroids, with (IMHO) a much more scalable implementation of zero trust principles.


Or https://netbird.io which is open-source. You can host the coordination server too :)


> I have no problem criticizing tech companies, but I try to wait until they behave badly.

I'd rather not wait until they have a (quasi-)monopoly on something though. Twitter was great until…


when was twitter ever great? It has been creating echo chambers from day one, and deliberately making discourse difficult and non-nuanced. It's arguably the shittiest form of human communication, and that includes facebook.


Wouldn't the point be that they're an indecent, possibly bad, actor by default, since they're a business at all rather than just creating or contributing to protocols/standards that would resolve the issues their product relies on to exist? The only way they could be a good actor is if they're using the money from their sales to fund that initiative with a plan to obsolete themselves.

I suppose if you follow that thread though a lot of businesses just shouldn't exist except for fulfilling the need they fill for the sake of those in need.


Companies are allowed to solve problems for a profit. People can choose to sell their time and energy or give it away. The choice is the default.

In fact, I prefer that capitalist model at this point having seen countless OSS/nonprofit efforts turn into glorified abandonware.

At least the business has an interest in remaining a going concern and maintaining the stack.


BTW the best way to make standards happen is to sell a product based on the standard. Academic standards don't go anywhere.


I have no idea what the adoption is, but this reminds me of the really nice work the buf.build people are doing with ConnectRPC.

I have a SaaS-crush on buf because they did such a good job on fixing such an annoying problem.


I never thought of this. Forces me to rethink every negative post people made against DNSSEC which shaped my opinion. I still think that IPv6 and DNSSEC do more harm in practice than what they solve. Maybe the SCW podcast can do a deepdive on this together with somebody who is militantly-pro DNSSEC. <3 ...

edit: maybe even invite 2 or 3 DNSSEC advocates @tptacek :)


I don't think the analysis upthread should make you rethink DNSSEC, since it, too, is a centralized system; rather than being controlled by Avery Pennarun (you could do worse), it's controlled by an unholy alliance of world governments and companies like Verisign.

If we could find a credible DNSSEC advocate (for our audience; that is: a cryptography engineer, vulnerability researcher, or an engineering leader at a major firm), we would absolutely invite them on.

'teddyh below gave you links to two pro-DNSSEC resources; fun note: the latter source (Geoff Huston, one of the world's more respected networking researchers) has since then written this:

https://blog.apnic.net/2024/05/28/calling-time-on-dnssec/.


thanks, much appreciated to you and teddyh for these links. really needed these opposing views.


The title of that article which you link to is “Calling time on DNSSEC?”, and Betteridge's law of headlines applies to it. Here’s the final paragraphs from that article:

I guess the question we should be asking is — if we want a secured namespace what aspects should we change about the way DNSSEC is used to make it simpler, faster, and more robust?


I'm happy just to see more people reading it. People can make their own call about it.



I may be in favour of DNSSEC, but I admit that it's time for a v2 of the RFC that removes the stupid "encryption can't be done at the endpoint" restriction. In practice you can just turn on validation on many computers and gain its benefits, especially if used in the manner as described here where you can just block connections to unprotected hostnames to work around the most glaring issue, but the whole spec is written for a world we've moved beyond.

IPv6 doesn't have that problem, though.


thanks!


> The eternal problem with companies like

it's not a problem specific to any kind of corporation, or to corporations per se, but to organizations or, even more broadly, to solutions.

though, do you really think that having a solution to a problem is worse than just having the problem?


It is a problem if a company makes a lot of money ”solving” the problem, but:

• This does not really solve the problem, since a real solution would be to change the internet to make the problem go away

• A company making a lot of money gets to have an enormous influence on what is considered reasonable to standardize on. See for instance Google’s and Microsoft’s influence on things like the W3C. (Or if Tailscale is allowed to define what ”The New Internet” will be.)


seems indeed that microsoft is making lots of money by defending the status quo


All large incumbents defend the status quo, except when advocating for larger barriers to entry for new and smaller competitors.


Companies like the ones you list solve people problems. Their business is about using abstraction to create customer experiences that match the market demand. I want to say that "institutions" solve computer science problems, but it's much more complicated than that.


Cloudflare sells bulletproof vests


[flagged]


This is a poor analogy. Historically there is a significant cost to making bad cars with frequent repair needs.


As somebody working in the modern automotive field I would like to disagree. The only incentive for car manufacturers is price point vs. warranty period; after that you’re on your own. “Durability” and “reliability” are no longer selling points for any automaker.


As someone not in the field, do you believe there is a true, objective, statistically valid way to tell which manufacturers (or which specific models) are more durable than others? All I see are consumer surveys, JD Power (more surveys), etc which, like most surveys, have wide error margins, and overall seem rather anecdote-based, and don't necessarily account for different stresses placed on the car based on driving style, diligence of maintenance, climate, etc.


this is a poor statement. the cost is not in dispute, but the bearer of it.

historically car owners need to pay for repairs.


Look at how the early 90s Ford Tempo resale value compared to Toyotas of the era. Trash cars don't keep their value. Toyota could then charge a premium because they were known for quality.


is resale value what the manufacturer wants? I mean they want to sell new cars after all...


Resale values do have an impact on new car prices. The better a vehicle holds its value the easier it is for the company to charge more for a new car.

It's also worth considering that, for better or worse, very few people actually own their cars today. When you have a loan on it the resale value becomes really important. If the manufacturer wants the kind of customer that buys a new car every few years they'll need resale value that at least keeps up with the principal on the loan over that time.


> is resale value what the manufacturer wants? I mean they want to sell new cars after all...

They have a higher resale value because they have a reputation of lasting a long time, and people are thus perhaps more willing to pay a higher initial purchase price because they know their "investment" will last longer.

And while they may not be planing to sell their car after only a few years, knowing that they'll get back more of their "investment" is also probably sitting in the back of their mind ('just in case').


I.e. a form of perverse incentive, or the cobra effect. Endemic to capitalism, especially in infrastructure.


An incredibly long ramp up to complaining about centralised control by rent seekers (a very reasonable complaint!) which gets bogged down in some ostensibly unrelated shade about whether client-server computing makes sense (it does) or is itself somehow responsible for the rent seeking (it isn't; you can seek rent on proprietary peer to peer systems as well!) to then arrive at:

> There’s going to be a new world of haves and have-nots. Where in 1970 you had or didn’t have a mainframe, and in 1995 you had or didn’t have the Internet, and today you have or don’t have a TLS cert, tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.

The king is dead, long live the king!


"...you can rent seek on proprietary peer to peer systems as well..."

I still use a non-proprietary one that predates Tailscale and that is not OpenVPN. It is small and simple enough even I, a non-programmer, can make modifications.

It's possible one ends up using client-server in order to achieve peer-to-peer because not everyone has an internet-reachable, non-firewalled IP address. Using some hosting company's server to run a "supernode" may be required. No traffic needs to pass through it if it is used only as a "rendezvous server" so the cost can be minimal.

Companies that try to compete with "free" always draw high scrutiny from me. Stop using that free software and start paying us. We added 100 unnecessary "features".

Not doubting this "corporate strategy" can succeed, at least short-term. Look at Slack. But these subscriptions are not for me.

Client-server versus peer-to-peer is misdirection. The real issue is proprietary versus non-proprietary. IMHO.


What is the non-proprietary option you are referring to?



tinc-vpn is great. I use it to build L2 mesh islands and then quagga to route between those.


Not sure if the parent means WireGuard, but my GitHub page has a way to get around CGNAT using WireGuard for use with a Nintendo Switch (or any wifi/etc. device that doesn't run an editable OS)


Wireguard is L3 not L2.

re: GP comment. It really does not matter which non-proprietary solution one chooses. It is personal preference. I know what I like but others might not like it. There are many options to choose from. And (I hope) there will continue to be more.


True, but you can easily make an L2 mesh network with a bunch of WG endpoints using tools built into the Linux networking stack:

https://gitlab.com/NickCao/RAIT

https://github.com/m13253/VxWireguard-Generator


Hopefully referring to the (excellent) sshuttle:

https://github.com/sshuttle/sshuttle

... which allows you to turn any system you have an ssh login on into a VPN endpoint.


Wasn't sshuttle created by the now CEO of Tailscale?


Yes, I think so - original project is at:

https://github.com/apenwarr/sshuttle

... and I had not made that connection before ...


Lol. The tailscale CEO is preaching “tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.”??

That won’t go down well in 10 years if they don’t become Microsoft-scale juggernauts.


Or they'll just get bought by Microsoft, Amazon, or Cloudflare and that'll be that

I like Tailscale just because it's OpenVPN without the unbearable agony of setting it up so it actually works


Yeah it's a weird flex. I'd use Tailscale today if it was open all the way up and down.

If not, why bother? TLS and http don't charge licensing fees...


You can use tailscale without using tailscale hosted components, using purely open source parts.

I have switched where possible, both my own networks and clients', to use Headscale, which is a fully open source coordination server compatible with Tailscale.


Honestly, I kind of missed Hamachi in the last decades.

It was such a superb and easy-to-use tool for designing/configuring your own private networks at the time. Filesharing, local game LANs, development cooperation, heck, even media streaming were all so easily done.

Personally I think that the future of peer to peer isn't Tailscale; it's more something along the lines of a self-hosted Hamachi variant that's able to generically put nodes together from across different NATs and ASNs, generically understanding NAT-breaking techniques and STUN/TURN/turtle routing.

A tool like this that could also allow remote users to chime in without a centralized VPN gateway would be a killer feature for the modern world.


That sounds a lot like Headscale.


The issue I have with tailscale/headscale is that its focus isn't being an end user app that people can start on demand.

Hamachi was different because a child could use it (literally). It was designed like an instant messenger, and you could easily create groups and invite friends for a LAN party. No IP masks, no hashes, none of that complicated stuff was necessary.

I'd only see maybe a tool that was built on top of headscale that could do that, but headscale's focus is too far off for something like that, and in my opinion too low level.


Rent seekers are bad! Don't you all hate landlords?! Now let me tell you why you should pay rent to me as well as everyone you currently pay rent to!


"An incredibly long ramp up ..."

Agreed. We would all do well to learn about, and begin implementing, "Iceberg Articles":

https://john.kozubik.com/pub/IcebergArticle/tip.html


This feels like an overly-complex treatment of the Inverted Pyramid in journalism: https://en.wikipedia.org/wiki/Inverted_pyramid_(journalism), or Bottom Line; Up Front: https://en.wikipedia.org/wiki/BLUF_(communication).

Start with the important statements, then expand. Doesn't have to be the "Tell you what I'm telling you, tell you, tell you what I told you" format that many (American) students were taught, but starting with your thesis statement does help ground it.

On the other hand, the topic blog is somewhat of a story, and I can hear the presentation being given behind it. It's just translated 1:1 to a blog, which is a different medium.


BLUF is bad; it's precisely a technique born in the world of newspaper publishing for writing catchy articles (what is now called clickbait). Classical philosophical writing is the exact opposite: start with some problems, elaborate in high detail and finish with a conclusion (the name says it all).


Clickbait is BLUF with a deceptive bottom line (BL). Clickbait is bad. You can choose to write in BLUF style without that.

In my experience, I only prefer "Classical philosophical writing" when I'm already convinced of reading the content (e.g. know the author, interested by the subject).

In almost all other cases, I prefer BLUF format: i.e. "get to the point, I'll read more if I'm intrigued".


I'm one of the people who actually use Tailscale for production systems where there are servers physically close to me, or at some other controlled locations, and then there are hundreds of users hundreds of kilometers away, all working via Tailscale.

I should say two things. Tailscale is amazing and I love it. The system could not exist without it, or I'd have to have at least ten more people in my team to manage all this 24/7. It's working, and it's good enough.

That being said, you do need to lower your expectations: it's not as good as "the internet". The latency spikes periodically, the connection drops sometimes, the MagicDNS just magically stops working or interferes with the system. Since we have many users, we've encountered every possible problem one can encounter, and then there's still something new you'll see tomorrow.

In any case, we believe in Tailscale and its vision; it's a categorically new approach that simultaneously gives you control over the hardware, reduces the cost, and improves the security. Our first big production server was a 4-core Linux laptop!

We love Tailscale and we wish the product a prosperous life and continued development. Thank you TEAM TAILSCALE!


Would you mind going into more detail about the 4-core Linux Laptop as a production server via Tailscale, please? I too use Tailscale and love it for self-hosting internal stuff but I never thought about using it for public facing production stuff. Now I'm really curious to hear more about your setup (if you're willing to share of course).


Not the person you’ve asked but I regularly use it for backends for projects. Connect your database machine to your Tailscale network and have your code talk over the Tailscale IP to connect to it. I’ve served multiple Flask frontends that have talked to backends this way - works great! I use various cloud VMs to host the frontend, and they all talk to an Optiplex Micro box under my desk for their backends.
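(For anyone wanting to picture it, a minimal sketch of that pattern, assuming a hypothetical Postgres box whose Tailscale address is 100.101.102.103 and placeholder credentials - the only "special" part is that the host in the connection string is a tailnet IP, with WireGuard handling reachability and encryption underneath:)

    # minimal sketch: Flask frontend on a cloud VM, Postgres backend reachable only over the tailnet
    # 100.101.102.103, the db name and the credentials are placeholders for illustration
    import psycopg2
    from flask import Flask

    app = Flask(__name__)
    DB_OVER_TAILNET = "host=100.101.102.103 port=5432 dbname=app user=app password=secret"

    @app.route("/")
    def count_users():
        # talk to the backend over its Tailscale IP like any other host
        with psycopg2.connect(DB_OVER_TAILNET) as conn:
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM users")
                return {"users": cur.fetchone()[0]}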


This, plus you get a free domain name with a working HTTPS cert.


It's very simple. We needed to begin somewhere, and until we had at least 10-20 active users loading our database with their analytics queries, we wouldn't require a lot in terms of compute power and speed. However, we do sometimes have electricity stability problems (and internet outages), so something that has its own battery and the ability to switch to a backup wifi/LTE connection was a great choice as a start.

So that laptop was a "free server" we already had. It's now been replaced by a much beefier miniPC.


> there are hundreds of users hundreds of kilometers away, all working via Tailscale

Do you require your users to install Tailscale?


I love Tailscale, but this post gives me the creeps. The internet succeeded because it was built on standards and was completely free. With Tailscale, I get wireguard is open source and we have things like Headscale. But the whole everyone gets an IP, doesn’t it depend on Tailscale owning a massive ip address space? We can all wait until full ipv6 rollout, or we can depend on centralized ipv4, and servers and proprietary stuff. Maybe a bit hypocritical?


You can self-host a Tailscale control server with Headscale[1]. It's not quite at feature parity with Tailscale, but it supports most if not all of the current feature set, and it's improving every day. One of the lead devs is even paid by Tailscale to work on it, IIRC.

I run it for my personal self-hosted infra, and it works really well. Setting a custom control server URL is relatively easy (at least on Windows and Android which I use).

I use Taildrop, I serve Docker containers to the tailnet, etc. Headscale works really well and is worth a go.

1: https://github.com/juanfont/headscale
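For what it's worth, pointing an official client at Headscale is roughly a one-liner on desktop platforms (assuming a hypothetical instance at https://headscale.example.com; the mobile apps expose the same thing as a custom server setting):

    tailscale up --login-server https://headscale.example.com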


The question is: how long will Headscale be supported in the official clients - how long will the incentives of Tailscale's VCs align with the freeloaders?

The official clients (most valuable: the polished mobile apps easily installed from the default app stores) are one auto-update away from cutting ties when push comes to shove, the same as all commercial VPNs with a free tier.


The clients are the open source part of Tailscale. They can be forked and built by someone else if required.

However I do not think Tailscale is going to remove the custom control URL feature from their mobile clients. For one, I think there are legitimate "Tailscale Enterprise" use-cases for the custom login server.

Additionally, I have heard that Tailscale has been supportive of the Headscale project, providing assistance to the devs.

Further, Tailscale seems fairly committed to keeping their clients open source and engaging with the developer community. Of course, as you say, this can change at any time.


I think clients are the least to worry about. They can be built by someone else if the need arises.


Cool! Any important features you miss when running Headscale?


Mostly support for features relevant to multi-tenancy - the official Tailscale service does things like separate "tailnets" that belong to different accounts with different SSO, but lets you share access to hosts between tailnets with ACL rules, etc. Also Tailscale Funnel, which uses a Tailscale-hosted service to provide ingress to hosts behind the VPN.

And of course the API used to manage the official server, so the rare things that depend on it won't work - but that's more a case of the project not having needed to work on it.


Nothing that I've noticed. I actually have never run vanilla Tailscale without Headscale so I'm not sure.

I think auto TLS requires some extra config, and DNS rules. I don't use it so I'm not sure.


DoH DNS support (beyond the single existing NextDNS option)


100.64.0.0/10 is a reserved IP block for carrier-grade NAT.


More info about Carrier-Grade NAT (for others who, like me, are only encountering this term for the first time):

https://en.wikipedia.org/wiki/Carrier-grade_NAT

Can anyone enlighten me regarding what is different or special about 100.64.0.0/10 vs., say, 192.168.0.0/16 or 10.0.0.0/8?

Edit: Answered my own question by digging into more wikis, there is a helpful table of reservations and intentions here: https://en.wikipedia.org/wiki/Reserved_IP_addresses


> Can anyone enlighten me regarding what is different or special about 100.64.0.0/10 vs., say, 192.168.0.0/16 or 10.0.0.0/8?

A bit of context: if an ISP cannot get enough IPv4 addresses for the WAN-side of people's home routers, some problems exist:

* something in 192.168/16 is generally used for the LAN-side of people's home routers, so that cannot be used on the WAN side

* 10/8 is used for business/enterprise corporate networks, so it also cannot be used on the WAN side, because if people VPN into the corporate network, the router may get confused

* similarly for 172.16/12: often used for corporate networks

So the IETF/IANA set aside 100.64.0.0/10 as it had no 'legacy' of use anywhere else, and is specifically called out to only be used for ISPs for CG-NAT purposes. This way its routing does not clash with any other use (home or corporate/business).

    IPv4 address space is nearly exhausted.  However, ISPs must continue
    to support IPv4 growth until IPv6 is fully deployed.  To that end,
    many ISPs will deploy a Carrier-Grade NAT (CGN) device, such as that
    described in [RFC6264].  Because CGNs are used on networks where
    public address space is expected, and currently available private
    address space causes operational issues when used in this context,
    ISPs require a new IPv4 /10 address block.  This address block will
    be called the "Shared Address Space" and will be used to number the
    interfaces that connect CGN devices to Customer Premises Equipment (CPE).
* https://www.rfc-editor.org/rfc/rfc6598.html
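A small illustrative check (not Tailscale-specific) if you ever wonder whether an address you're seeing is RFC 6598 shared space rather than a "real" public IP; incidentally, this is also the block Tailscale hands its 100.x.y.z addresses out of:

    # is an address inside the RFC 6598 shared space (100.64.0.0/10)?
    import ipaddress

    CGNAT = ipaddress.ip_network("100.64.0.0/10")
    for addr in ["100.64.1.1", "100.127.255.254", "100.128.0.1", "203.0.113.7"]:
        print(addr, "->", ipaddress.ip_address(addr) in CGNAT)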


Interesting, I thought docker uses 172.*.


Yes, 172.17.0.0/16 by default for the docker0 bridge, with user-defined networks carved out of the rest of 172.16.0.0/12 (172.18.0.0/16 and up).

And that actually was a problem at a previous job I was at: when COVID hit, our VPN address range just happened to be in that range, and so a bunch of developers were having issues. (IIRC, we re-configured the VPN appliance to use something else.)


…and it’s a perfect display of the technical competence of Docker Inc. :) they do stuff like that, in all kinds of domains, all the time.


It does; 172.16.0.0/12 is just another RFC1918 internal subnet.

Edit: I should say, a subnet that docker carves smaller subnets out of for its networks.
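If those defaults collide with a VPN or corporate range, the pools Docker draws from can be overridden in /etc/docker/daemon.json; a hedged example (the 10.200.0.0/16 base here is an arbitrary choice for illustration), after which newly created networks get /24s carved from that base instead of the 172.16.0.0/12 defaults:

    {
      "default-address-pools": [
        { "base": "10.200.0.0/16", "size": 24 }
      ]
    }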


>But the whole everyone gets an IP, doesn’t it depend on Tailscale owning a massive ip address space?

No, because Tailscale isn't "the Internet", it is a bunch of disconnected moats. The IP space needed by Tailscale only has to be as big as the largest moat. And you can only be connected to a single moat at a time.


If you had to move off of tailscale, what would you move to?


Zerotier is I think the obvious answer? I haven't used it though; it's more proprietary, not less.


AFAIK, Zerotier is about equally proprietary, more-free (as in beer), and has been doing the node-to-node mesh thing instead of spoke-and-hub longer than Tailscale has been in existence.

And if I remember correctly, ZT was initially created to provide something like this "New Internet" concept that Tailscale has apparently recently discovered, except they called it "Earth" and abandoned it in 2023.

(Some things don't change, I guess.)


Tailscale does p2p, not hub-and-spoke, with an additional DERP system that combines various NAT bypasses with, in the worst case, relaying over HTTPS - and you can host all the components yourself.


You're absolutely correct.

I didn't intend to leave to implication the fact that Tailscale is node-to-node, or that it is not hub-and-spoke.

(I even had this up in a browser tab when I wrote that previous comment: https://tailscale.com/blog/how-tailscale-works)


Kinda? It works great in practice. You can run your own controllers if you want which completely disconnects you from the proprietary service. But the code is BSL.


I didn't mean to suggest it doesn't work well, as I said I've not used it.

It's still proprietary if you self-host it. I was thinking in particular that Tailscale uses WireGuard while Zerotier uses something custom, i.e. proprietary. Note that the context was:

> The internet succeeded because it was built on standards and was completely free. With Tailscale, I get wireguard is open source and we have things like Headscale. But [...]

to which the commenter I replied to asked about alternatives. So I wasn't saying Tailscale is great and open and standards-compliant and Zerotier is not; I was saying it's the obvious competitor, but if that's your problem with Tailscale then it's if anything worse in that regard.


I use WireGuard. As you add more keypairs, it becomes a bit of a nightmare to maintain, though Vim with syntax highlighting helps a lot.
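(For context, each peer is a stanza like the following in wg0.conf - keys, addresses, and the endpoint are placeholders - and in a full mesh every new device means touching every config that needs to reach it:)

    [Interface]
    PrivateKey = <this host's private key>
    Address = 10.10.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <peer's public key>
    AllowedIPs = 10.10.0.2/32
    Endpoint = peer.example.com:51820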

Because of this, I'll be switching to Headscale + Tailscale.


It depends on your use case. I use WireGuard back to two geographically independent locations; keys are managed via our IPAM.

I don't need east-west traffic over the VPN; it's very north-south based. Something like Headscale or another SD-WAN solution (automatically establishing VPN routes) would make sense if I needed to transport a lot of traffic east-west, but that's just not a requirement.


I think nebula is the obvious FOSS competitor? With the unfortunate exception of the Android client being closed source.


I use Nebula because its iOS client does not drain my battery. Tailscale has had that known bug for years and they never managed to fix it, which is a major deal breaker.


They have released a slew of updates recently to fix this, and they did a complete rewrite of the Android app


OpenZiti would be another - https://openziti.io/. I work on the project. One issue with Nebula is provisioning new clients with identities. It's not completely open sourced by the Nebula company.


I think Nebula is much, much closer to the "new internet". Lighthouse nodes can serve as untrusted brokers that help to connect everyone securely. No need for a central authority with God-like importance, which the Tailscale CEO obviously wants to have.


NetBird is a promising option. OpenZiti is another. ZeroTier hasn't evolved much, IMHO. Would also love to see someone breathe new life into https://github.com/omniedgeio/omniedge


I like Tailscale, but this reads as too self-aggrandizing.

You have a mesh VPN product with some value-added services on top of it. That's great, but this idea isn't novel or unique. Why should your solution be the "new internet" instead of any of the alternatives?

I wouldn't want to rely on a single company for all my internet infrastructure, anyway. So I'll stick with the traditional internet with all its complexity. Its major problems aren't technical but social, and no new technology will solve those.


> Its major problems aren't technical but social, and no new technology will solve those.

Really? Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage? I view that problem as deeply technical. Unless by "social" you simply mean everyone should become an experienced sysadmin. (Or the slight variation: everyone should know an experienced sysadmin who's willing to run their application for them for free.)

Take something as mainstream as social media. Imagine a world where Facebook/Twitter/TikTok/YouTube/Reddit/HN/etc worked (seamlessly) like bittorrent. An application on your machine, when you run it, joins a "Facebook" network where your friends see you online through their instance of the application. Your feed/wall/etc is served to them directly from your machine. All your communication with them is handled directly between the 2 (or 1000 or millions) of you. No centralized server needed. You can easily extend and apply this to the majority of centralized applications today. The only ones I can think of where this wouldn't work would be inherently centralized services like banking, for example.

There are already plenty of p2p networks that show that this is a viable solution. Bittorrent, soulseek, bitcoin, etc.

All the problems you will run into, however, to make this as seamless as just connecting to facebook.com are purely technical. The initial big hurdle is seamless p2p connectivity. That is, without port forwarding, dynamic DNS, or requiring advanced networking, security, and other sysadmin knowledge from every user. Next would be problems like what happens when the node is offline? What happens to latency and load if you need to connect to thousands, hundreds of thousands, or millions of machines just to pull a "feed"? How is caching handled? How are updates/notifications pushed? How do nodes communicate when they are wildly out of date? Where is your data stored? How do you handle discoverability, security, etc.

All deeply technical problems. Most are solvable, but you're gonna have to invest a significant amount of effort to solve them one by one to reach the same brain-dead simple experience as a centralized service. The fediverse has been trying to solve just a small subset of these problems for over a decade now, and the solutions still require a highly capable sysadmin to give users a similar (or only slightly worse) experience than twitter.com.


> Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage?

Not quite. The internet _is_ decentralized. What made the web so centralized from the start could partially be the result of lacking tools that made publishing as easy as consuming content. I.e. had we had a publishing equivalent to the web browser, perhaps the web landscape would've been different today. You can see that this was planned as phase 2 of the original WWW proposal[1] ("universal authorship"), but it never came to pass AFAIK.

So you could say that the problem is partly technical. But it's uncertain how much this would've changed how people use the web, and if companies would've still stepped in to fill the authorship void, as many did and still do today. Once the web started gaining global traction in the early 90s, that ship had sailed. People learned that they had to use GeoCities to publish content, and later MySpace, Facebook and Twitter. These services gained popularity because they were popular.

There have been many attempts over the years to decentralize the web, but now the problem is largely social. As you say, we've had the fediverse for over a decade now. How is that going? Are technical issues still a hurdle to achieve mass adoption, or are people not joining because of other reasons? I'd say it's the latter.

Most people simply don't care about decentralization. They don't care about privacy, or that some corporation is getting rich off their data. They do care about using a service with interesting content where most of their contacts are. So it's a social and traction issue, much more than a technical one. The only people who use decentralized services today are those who care more about the technology than following the herd. Until you can either get the average web user interested in the technology, or achieve sufficient traction so that people don't care about the technology, decentralized services will remain a niche.

There is another technical aspect to this, though. Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data. Many things will need to change on the operational side before your decentralized dream can become reality. I think this landscape would've also been different had the web started with decentralized principles, but alas, here we are.

[1]: https://info.cern.ch/hypertext/WWW/Proposal.html


> As you say, we've had the fediverse for over a decade now. How is that going?

Convenience trumps everything. All the parts of the iPhone existed for a few years before it -- especially PDAs with touch pens -- but what made the iPhone succeed was putting everything into a more convenient and easier package.

The amount of time worked on thing X has almost zero correlation with its adoption, as I think all of us techies know.

> Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data.

While that is true, let's not forget half-solutions like TeamViewer's relay servers, Tailscale / ZeroTier coordinators, and many others. They are not a 100% solution, but then again nothing is nowadays; we have to start somewhere. I agree that many ISPs would be very unhappy with a truly decentralized architecture, but the market will make them fall in line. I have no sympathy for some local businessmen who figured they could run off with tens of millions on a $50K investment. Nope, they'll have to invest more or be left out.

So there would be a market reshuffling and I'm very OK with it.

---

But how do we start off the entire process? I'd bet on automated negotiation between nodes + making sure those nodes are installed on many more machines. I envision a Linux kernel module that transparently keeps connections to a small but important subset of this future decentralized network, and the rest becomes just API calls that would be almost as simple as the current ones (barring some more retry logic because, f.ex., "we couldn't find the peer in one full minute"). I believe many devs would be able to handle it.


The Holochain project has invested the last 5 years in solving each of these problems…

You can now build Internet-scale distributed systems, with or without requiring centralized infrastructure (e.g. DNS, SSL certs, etc.).

In other words, massively distributed apps without any means for centralized authorities to stop them.


What's Tailscale's value prop? It's just a kludging-together of various FOSS heavyweights..?


Tailscale's value proposition is that it just works: all the parts you normally have to assemble yourself come already built and connected, with added value in terms of ACL support and things like Funnel (exposing services over HTTPS on nodes that don't have a public IP) and Taildrop (easy file sharing).

Also various integrations, like tailscale k8s operator.


precisely. Isn't every business that plus some custom stuff & employee time?

i.e., isn't every business just a kludge of FOSS heavyweights, say, for example, when they write an app in some open source language and deploy it on an open source OS with open source orchestration, etc.?

I think Tailscale is a lot of FOSS software, with the utility that it massively lowers the barrier to entry.


Oh, you mean like most IT businesses nowadays?


Oh please. The value proposition is that they spend time so you don’t have to, on something that isn’t your actual product. If you don’t understand why that provides value to other businesses, you probably don’t run one?


Isn't yggdrasil[1] supposed to be the New Internet?

If not, why Tailscale specifically, and not Netbird, Nebula, Netmaker or some other competitor?

The article is indeed very well written, but gives the wrong vibes, like something's coming: acquisition, pivot, split, shutting down, etc. Also, "we're just getting started", the famous last words.

Just to balance my healthy mistrust, I'd like to add that I'm a satisfied Tailscale user, mostly impressed with how little it requires of me to just work.

[1] https://yggdrasil-network.github.io


Yggdrasil would indeed better fit the description. It should however be noted that, while it does work great, it is still a research project.


I really enjoy and appreciate the tailscale service, but this article didn't click for me. I love an inspiring CEO rally speech as much as the next early adopter, and agree that there is a ridiculous amount of developer friction and complexity in computing, but tailscale still has its own friction and isn't on track to solve the big picture issues _at all_.

As a concrete example, a few weeks ago, I invited my dad to my tailnet with the intent of using remote desktop into his machine to help him fix something. He accepted the invite, and then I couldn't ping his machine despite it appearing in my TS domain web interface.

Now he hates tailscale, and I lost credibility because prior I told him how awesome it is. In his view, it wasted his time and doesn't "work right", and metadat is a fool.


Is your dad running Windows? The Windows firewall is known to block ICMP traffic, a problem that neither Tailscale nor any other p2p VPN can solve.


Maybe, but even ICMP pings? He also couldn't ping my systems; it seemed really broken.


Ping uses ICMP. Windows blocks ICMP by default, so yes, `ping <windows-host>` doesn't work by default. Is the system your father was trying to ping a Windows system as well?

The other thing to check is if he was running another VPN on his machine at the same time. Running multiple VPNs at the same time (on both Windows and Linux) requires extra fiddling to map the routing correctly to prevent their rules from overlapping/breaking each other. https://tailscale.com/kb/1105/other-vpns
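If the goal is just making ping work for sanity checks, Windows can be told to answer ICMP echo without touching anything else - something along these lines in an elevated prompt (the rule name is arbitrary):

    netsh advfirewall firewall add rule name="Allow ICMPv4-In" protocol=icmpv4:8,any dir=in action=allow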


No other VPN, but my Windows machine's firewall is on and it pings fine.

Anyway, tailscale still has more to go. Inviting someone to your tailnet doesn't seem to be the same as adding a machine yourself.


Oh yeah, forgot to mention. On a given tailnet, users can only reach their own machines. Each machine that joins the network has an “owner” shown under the machine name in the admin portal. By default users can only reach their own machines, not everyone else’s. As the network admin you can manage that through the ACLs tab.
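For reference, that's a small addition to the tailnet policy file; a hedged sketch (alice@ and bob@ are hypothetical users standing in for you and your dad - the policy file is HuJSON, so the comment is allowed):

    {
      "acls": [
        // keep your existing rules, then let alice reach every device bob owns, on any port
        {"action": "accept", "src": ["alice@example.com"], "dst": ["bob@example.com:*"]}
      ]
    }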


And this is why tailscale isn't solving the fundamental issues of connectivity. Thanks and cheers eddythompson80.


What is the alternative, here? Letting all machines on a tailnet talk sounds like a security issue. Maybe a better onboarding flow that prompts you to set ACLs when inviting a new user?


It seems you're assuming the firewall or my machine configuration was the issue rather than a tailscale "sharing" feature issue.

I am, among other things, a network engineer, and previously I shared my tailnet with my brother's windows machine by logging him into my account directly, and it worked flawlessly.

I want TS to win, but they've got product and engineering work to do if they're serious.


I don’t know if it’s the same issue, but the problem I ran into is that I misunderstood how it works for families who just use gmail addresses. It’s quite counter-intuitive. The organization stuff isn’t for you - instead each person creates their own tailnet and you connect them. See:

https://github.com/tailscale/tailscale/issues/10731


If this was the source of issues, it's still a product failure. I hope they will address it.

If I can't figure it out, 99% of others won't either.


Probably a new network interface gets created, and either your dad or Windows decided that it was not a "home/personal", i.e. trusted, connection where answering pings or RDP is a good idea.

TeamViewer, AnyDesk, et al. are made for the task.


ACL issue?


I think the author misdiagnoses the problem, and the proposed solution simply hides the centralization instead of removing it.

The reason AWS is expensive is not because of IPv4, or the datacenters. It's mostly in their software/managed offerings, and the ability to quickly add more servers. If you are a "serious company" and you don't want to pay AWS or a similar company, renting a rack and colocating your own servers (either within your premises or in a datacenter) is doable and done by lots of companies.

I disagree that certificates have caused centralization; they're not something separating the haves and have-nots, and they are in no way comparable to having or not having a mainframe. HTTPS becoming pseudo-mandatory didn't push people into having their own (sub)domains, which is nowadays the only requirement to obtain a certificate. That already happened out of convenience.

The other point of centralization mentioned is DNS, which tailscale doesn't avoid at all. MagicDNS still relies on the ICANN root, as does the tailscale control plane. And if all you wanted was a free subdomain, there are plenty of people offering that.

If you are behind CGNAT, tailnets aren't particularly less centralized, as traffic has to flow through the DERP servers. I doubt tailscale can keep providing these free of charge when the volume is in the tbps instead of the gbps.

I agree that tailscale (and similar solutions) help in the last remaining case, which is accessing your computer that is behind a NAT. I even think they could reach the dozens of millions of users. This is, in my opinion, not enough to claim the title of "the new internet".


This rests on the incorrect assumption, pointed out in the post, that most applications need the kind of scale that warrants quickly scaling to more servers.


The article tried to make k8s all about scale, when that's not the whole story.

On other socials, a screenshot of the 'Not scaling' section is getting responses of "Those idiot developers think they need k8s scaling for their 1 req/s sites, ha ha."

The author brags about being able to (skip testing, CI/CD pipelines and just) edit their perl scripts (in prod,) really quickly.

What uptime is associated with that practice? As many 9's as it takes for Brad to debug his perl program in prod? This approach doesn't even scale to 2 developers unless they're sitting in the same room.

DevOps isn't a machine where you put unnecessary complexity in one end and get req/s out the other end. It's about risk and managing deployments better.

If I really wanted to engineer for req/s, I'd look at moving off k8s and onto bare metal.


In an enterprise environment, I'd like a networking solution that allows me to run an app on my own office workstation and expose it as a service to some part of the company at an SLO that can reasonably be guaranteed with a workstation: 99.9%. That would allow me to cut so much time spent "productionizing" stuff that doesn't need CI/CD pipelines or a deploy to a datacenter: just me editing a Python file and restarting it.
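The plumbing half of that already works today if you're willing to bind the little internal service to the workstation's tailnet address only (the 99.9% SLO is a separate problem). A minimal sketch in Python, where 100.101.102.103 is a placeholder for whatever `tailscale ip -4` reports:

    # serve a directory only on the workstation's Tailscale IP, so coworkers on the
    # tailnet can reach it while nothing is exposed publicly
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    TAILSCALE_IP = "100.101.102.103"  # placeholder; use the address `tailscale ip -4` prints
    HTTPServer((TAILSCALE_IP, 8000), SimpleHTTPRequestHandler).serve_forever()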


Of course these ideas are not that new. IPv6 was supposed to give end-to-end connectivity to all, and originally IPsec was supposed to be a mandatory part of IPv6, giving each internet host a cryptographic identity. And so on.


I was curious why the article didn't mention IPv6 at all, since Tailscale does support it.

IPv6, together with WireGuard, gives privacy, security, and performance. The downside is the complexity of setting it up.

Tailscale builds on the shoulders of giants: IPv4, WireGuard, Samy Kamkar's NAT punching, OpenSSH, and probably many more. One of the upsides is the combination of these, and that the management interface is in general easy. But what counts for a CA is also true for Tailscale: both are using FOSS to, in the end, deliver a (proprietary) service.

But because almost everything is built on top of FOSS and there's Headscale (and they're cool with it), this isn't a major issue to me. Like, it is a downside, but not a major one, as vendor lock-in is practically non-existent. In fact, it is likely an upside from a business/support PoV.


I think there’s a common misunderstanding that with IPv6 anyone can connect to anyone else. That’s not true.

My laptop has an IPv6 address, as does the router that routes its traffic. There’s no NAT, that’s true, but there’s still a firewall — only inbound packets from a host and port that have already been sent to are allowed in. And in enterprise environments, from what I’ve seen, there’s a symmetric NAT on IPv6 anyway — the packet comes from a different IPv6 address and randomized port than the one the client sent it from, making peer connectivity impossible, as the source port varies by destination host and port.


Apenwarr is kind of an IPv6 hater. He thinks it's not going to happen.


> Apenwarr is kind of an IPv6 hater. He thinks it's not going to happen.

Well, T-Mobile US is 100% IPv6:

* https://www.youtube.com/watch?v=QGbxCKAqNUE

Facebook is IPv6-only on their internal infrastructure:

* https://www.internetsociety.org/resources/deploy360/2014/cas...

Microsoft has been moving to IPv6-only for their corporate network (so IPv4 address can be used for revenue-producing Azure):

* https://www.arin.net/blog/2019/04/03/microsoft-works-toward-...

So he better tell those folks that IPv6 is not a thing.


Because IPv6 is a mistake. That's why the market does NOT want it. Unfortunately, we are all starting to feel the heat of IPv4 exhaustion.

Anyway, remember IPv4 classes? Then they made it classless. IPv6 is not 128-bit, it's just 64-bit with a 64-bit host address. So, first mistake. IPsec mandatory? Pure stupidity. Crypto moves fast; every 10 years many protocols are obsoleted. How will you provide E2E connectivity with that?

In 1997 IPv6 was still far too immature to start a migration. Additionally, it was very different from IPv4, so it was mostly ignored. What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done. As a bonus, they should have thought about some basic IPv6 -> IPv4 interop so clients would NOT need to be dual-stack. And that could have worked back then. Now we are fucked.


The thing that you and all the other IPv6 haters miss is that none of that matters. IPv6 is happening, like it or not. And that has been the state for 10-15 years already.

Maybe in the 00s there was a window when there might have been true doubt about whether IPv6 was going to happen. But after that, it was just a question of "when", not "if".

Keeping on hating is simply not very productive. It's just much better to embrace IPv6, no matter its possible flaws.


> What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.

Tell me you've never had to seriously design and operate networks at scale without telling me etc...

This is a bit like Chesterton's fence - until you understand why (for example) ARP is a hilariously bad design and a major problem when trying to design networks at scale, you can't understand why someone might want to replace it with something more effective. IPv6 doesn't get a lot of stuff right, but the motivation behind replacing v4 was much more than simply "more addresses pls".

IPv4 was the mistake; Vint Cerf is on record as saying so. It should never have been let out of the lab.


It's not so easy. First, yeah, I was operating networks, maybe not hyperscalers, but 200+ switches. Yes, ARP had its problems, like ARP poisoning, but they are all sorted out already. IPv6 took its ND and brought a lot of other problems that we are solving AGAIN. A pure waste of time and effort.

Also, please cut the crap about IoT and centrally managed hyper IP networks. That's just several huge corporations. The majority of the Internet is small/medium shops doing it completely differently. Yet the big boys pop in and say you're doing it wrong, you must do it our way or go away. Not nice.

Yeah, that motivation became overengineering. They provided a protocol that does NOT fit the needs, it seems.

IPv6 will probably happen indeed. I doubt someone will pop in with a great protocol that makes IP obsolete.

Also, I wish IPv6 would really take off, because even if I personally don't like IPv6, its success would free up IPv4 address space for my retro networking projects.


IPv6 has already happened. I'm reading Hacker News over IPv6. Google is at 45% adoption.

The place that is behind is corporate networks.


> It's not so easy. First, yeah, I was operating networks, maybe not hyperscalers, but 200+ switches. Yes, ARP had its problems, like ARP poisoning, but they are all sorted out already.

ARP poisoning is the least of ARP's problems.

It can potentially have a blast radius that can bring down networks, and if it were actually sorted out, then things like BGP EVPN would not have needed to be invented. One of the touted benefits of BGP EVPN is reduced ARP and Layer 2 broadcasts.

I've seen ARP storms bring down even 'small' company networks (a dozen switches for ~200 people) because someone fed a simple desktop switch back in on itself and the access layer switch in the closet could not do STP with the simple switch.


That is not an ARP problem. It's called a broadcast storm, and it's a problem of stupid people and/or bad equipment. You can bring any network down with incompetence.

That's why newer switches have STP, DHCP snooping, ARP security and so on. Now take a look at ND table exhaustion alone. A trivial attack to pull off on an IPv6 segment. Is it solved yet? I don't know. I do NOT track it.

The whole PnP thing (I call it Plug and Pray) is a terrible approach imo. IoT created a hell of a lot of security problems (the biggest DDoS botnets are IoT). If someone needs autoconfiguration, they can slap DHCP on the segment. An easy and super old protocol on IPv4. (IoT connected directly to the internet? That's stupidity... but I will leave that for another talk.)

So IPv6 should be simple, easy to implement, and thus less prone to mistakes. All extras should be put a layer up.


> That is not an ARP problem. It's called a broadcast storm, and it's a problem of stupid people and/or bad equipment. You can bring any network down with incompetence.

It's a footgun. All footguns have ways to not trigger them, but saying you can't blow off a leg is also inaccurate. Reducing the number of footguns lying about is generally a good thing.

> Now take a look at ND table exhaustion alone.

No different than ARP table exhaustion (a finite L2-L3 mapping table). "First hop security" is a thing in both protocols.

> So IPv6 should be simple, easy to implement, and thus less prone to mistakes. All extras should be put a layer up.

I would argue that IPv6 is simpler to get going than IPv4: to start you don't need BOOTP/DHCP. In fact IPv4 later took some ideas from IPv6, e.g., 169.254.0.0/16 link-local addresses:

    This document describes a method by which a host may automatically
    configure an interface with an IPv4 address in the 169.254/16 prefix
    that is valid for Link-Local communication on that interface.  This
    is especially valuable in environments where no other configuration
    mechanism is available.  The IPv4 prefix 169.254/16 is registered
    with the IANA for this purpose.  Allocation of IPv6 Link-Local addresses
    is described in "IPv6 Stateless Address Autoconfiguration" [RFC2462].
* https://datatracker.ietf.org/doc/html/rfc3927


It looks simpler to start with... aka PnP... you just plug stuff in, SLAAC and later ND discovery kick in, and voilà, we have a network up and running. But somehow I see it as less manageable and controllable. In enterprise networks this is a serious issue. We need static IPs, we need well-known subnets, because we run FWs everywhere.

And yeah... soft link-local in IPv4 is a good idea. You can use it. In IPv6 you are forced to use it. Oh thank you, OSPFv3 configuration is so cool in IPv6...


> In enterprise networks this is a serious issue. We need static IPs, we need well-known subnets, because we run FWs everywhere.

Yes, and there are tools and procedures for that:

* https://datatracker.ietf.org/doc/html/rfc9099

But as the old saying goes: easy things should be simple, and hard things should be possible. I think IPv6 does that.


>What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.

This is literally what they did, except they made it 128-bit rather than 64.

The thing you're missing is that literally every IPv4 protocol breaks the second you change the bit count. Before you change the 32-bit address field you need to (a) redefine, bit for bit, every IP protocol so it can be understood by each IP-capable device and (b) somehow send a foolproof update to every IPv4 device in the world redefining how they ought to interpret IPv4 headers.


I do NOT miss that point. The point is, a new protocol should not be very different from the previous one unless it's really necessary. After all those years and R&D put into IPv4 to make it better, we ended up with a decent protocol. The only flaw is the too-small address space. With current IPv6, you have to throw out half of the stuff you know about IPv4 for, imo, no valid reason.

And I will say it again to be clear. I'm not a fan of IPv4+ contraption ideas like 'let's extend the IPv4 address space and try to keep it IPv4'. That's DUMB. Make a new protocol, improve things that were bad in IPv4 (are there any?), try to make it one-way interoperable with IPv4 (IPv6 -> IPv4), and we are done.

Remember that you are building a protocol for the entire planet. It has to be relatively simple and easy to implement. Any extras should be a layer up. The whole IoT crap annoys me a lot. This stuff should NEVER ever be connected directly to the internet. It creates a huge security mess. There should be an IoT GW to handle IP <-> (whatever IoT proto) and provide security.


>I do NOT miss that point. The point is, a new protocol should not be very different from the previous one unless it's really necessary.

>>The only flaw is the too-small address space.

>>>With current IPv6, you have to throw out half of the stuff you know about IPv4 for, imo, no valid reason.

ARP, DHCP, NAT, and the lack of built-in encryption are all huge problems that had to be addressed.

- ARP: incredibly inefficient, prime vector for abuse by malicious actors via arp poisoning

- DHCP: Man in the middle attacks, need I say more?

- NAT: Literally breaks the whole concept of IP addressing, incredibly inefficient as it requires manipulating packets mid-stream, literally designed as a temporary band aid to smooth our transition away from IPv4

- Built-in encryption: You say this makes things more complicated but I believe it is the opposite; better security is built into the foundation rather than having to be built into every protocol on top of it (SSH instead of telnet, SFTP instead of FTP, HTTPS instead of HTTP, etc.). The issue I'm having with your argument is that you're saying "you're fine with a replacement IP protocol which ditches the bad" and then go on to deride IPv6 for doing exactly what you're asking for (keeping it as close to IPv4 as possible while ditching the biggest sources of technical debt).

>And I will say it again to be clear. I'm not a fan of IPv4+ contraption ideas like 'let's extend the IPv4 address space and try to keep it IPv4'. That's DUMB.

But you literally did suggest exactly this when you said:

>What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.

Did I somehow misinterpret this?

>Make a new protocol, improve things that were bad in IPv4 (are there any?), try to make it one-way interoperable with IPv4 (IPv6 -> IPv4), and we are done.

IPv6 does provide a way to do exactly this; it's called NAT64: https://en.wikipedia.org/wiki/NAT64?useskin=vector

>Remember that you are building a protocol for the entire planet. It has to be relatively simple and easy to implement. Any extras should be a layer up.

Again, this really makes me think you don't work in networking. When you abstract security away from the underlying protocols you essentially leave a gaping hole in your security. The only surefire way to communicate securely is to bake encryption into the protocol itself (and even then it is hit or miss).

This is why we moved from HTTP/2 to HTTP/3. This is why we stopped wrapping telnet in IPsec tunnels and opted for SSH, why we stopped wrapping HTTP/2 in TLS tunnels and baked encryption into HTTP/3, and so on.

I don't want to spend a lot of time on IoT, but as a network engineer I can say that these devices exist whether you like them or not and make up a large portion of traffic, so we can't just not consider them when talking about how network protocols ought to be designed.


Yes, ARP had its problems, but they are solved right now. We have knobs in managed switches to handle them. ND just moved the problems somewhere else; please read about ND table exhaustion and attacks.

DHCP snooping, need I say more? Also, if you are operating on a network that is a high security risk, you just layer a VPN on top of it. That's why they got invented in the first place...

NAT is not that bad after all, imo. I like that my LAN is decoupled from the WAN. I'm multihomed and I do not need to bother announcing prefixes to both ISPs.

Yes, you still misinterpret my statement. I mean: take IPv4, just extend its address space, and create a new protocol out of it. It will not work with IPv4 itself because that's not possible. But why take old IPv4 instead of creating something from scratch? Simple: IPv4 works very well, so why trash the last 30 years of R&D put into it? Sure, if you can come up with something better, go ahead. IPv6 did not deliver on the promise.

Security is not as simple as 'slap encryption everywhere and we are done'; it's a more complicated matter. Encryption, control, management, endpoint security, router security. What's the point of encryption if your device can be compromised due to shitty mgmt and the traffic MITMed again? Or what's the point of encryption if it can be cracked within an hour by doing MITM again because the protocol got old?

Yeah, HTTP/3... created yet more problems that need to be solved now. Why is it that every time something new pops in, it trashes the R&D put into the past protocol, bringing the same or similar problems AGAIN? That's pathetic.

IoT, that's a good example actually. It has E2E encryption (mostly it's all HTTPS) and yet it's p0wned so easily, creating huge DDoS networks. I'm starting to wonder if you have any security clue at all.


> Crypto moves fast; every 10 years many protocols are obsoleted. How will you provide E2E connectivity with that?

Negotiation. IPsec using IKEv2 (RFC 4306/7296) started with (e.g.) 3DES when it was initially released, but now allows for AES (RFC 3602, 3686, etc), as well as other algorithms:

* https://www.iana.org/assignments/ikev2-parameters/ikev2-para...

> What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.

For anyone curious, the technical criteria for choosing the (then-labelled) IPng:

* https://datatracker.ietf.org/doc/html/rfc1726

And the evaluation of the available candidates and why the winner was chosen:

* https://datatracker.ietf.org/doc/html/rfc1752

One of the IPng candidates, SIPP, indeed did extend addressing from 32 bits to 64 bits (RFC 1710, RFC 1752 § 7.2), but it was deemed that this might not be enough and another transition would be even more difficult, so they went with 128 bits (RFC 1752 § 9).

Adding mechanisms for auto-configuration was one of the criteria for IPng; per RFC 1726 § 5.8:

    CRITERION
       The protocol must permit easy and largely distributed
       configuration and operation. Automatic configuration of hosts and
       routers is required.
    
    DISCUSSION
       People complain that IP is hard to manage.  We cannot plug and
       play.  We must fix that problem.
       
       We do note that fully automated configuration, especially for
       large, complex networks, is still a topic of research.  Our
       concern is mostly for small and medium sized, less complex,
       networks; places where the essential knowledge and skills would
       not be as readily available.
       
       In dealing with this criterion, address assignment and delegation
       procedures and restrictions should be addressed by the proposal.
       Furthermore, "ownership" of addresses (e.g., user or service
       provider) has recently become a concern and the issue should be
       addressed.
       
       We require that a node be able to dynamically obtain all of its
       operational, IP-level parameters at boot time via a dynamic
       configuration mechanism.
       […]
In a world of IoT, not having to have a BOOTP/DHCP(v4) server seems like decent foresight.


> That's why the market does NOT want it.

What market are you talking about?


The market as in the whole Internet. Some ISPs have adopted it; smaller ones fight it hard. Enterprise networks are behind too. They all have their reasons.


There are some very valid points here though:

https://apenwarr.ca/log/20170810


Here is a list of proposals for what could have replaced IPv4:

* https://www.rfc-editor.org/rfc/rfc1454.html

Here are the technical criteria for choosing the (then-labelled) IPng:

* https://datatracker.ietf.org/doc/html/rfc1726

And finally the evaluation of the available candidates and why the winner was chosen:

* https://datatracker.ietf.org/doc/html/rfc1752

If someone doesn't want to use IPv6, then what they're effectively suggesting is that we create a new protocol, and roll it out to every smartphone, tablet, laptop, desktop, server, (Wifi) router/CPE, ISP router, SMB router, enterprise switch, and IoT device. Meanwhile we've already effectively run out of IPv4 addresses (e.g., ARIN and RIPE pools are zero) and are just shuffling about whatever is left in auctions.

> There's one thing I forgot to mention in that big long story above: somewhere in that whole chain of events, we completely stopped using bus networks. Ethernet is not actually a bus anymore. It just pretends to be a bus. Basically, we couldn't get ethernet's famous CSMA/CD to keep working as speeds increased, so we went back to the good old star topology.

Except for 802.11 Wifi.


> If someone doesn't want to use IPv6, then what they're effectively suggesting is that we create a new protocol, and roll it out to every smartphone, tablet, laptop, desktop, server, (Wifi) router/CPE, ISP router, SMB router, enterprise switch, and IoT device. Meanwhile we've already effectively run out of IPv4 addresses (e.g., ARIN and RIPE pools are zero) and are just shuffling about whatever is left in auctions.

Although I've heard some ideas for an IPv4.1 that suffer from the obvious problem, I think the far more common view is rather that v4 is fine and its only problem is solved by NAT. Which I agree isn't actually a long-term solution, but let's try to meet the stronger argument.


> […] I think the far more common view is rather that v4 is fine and its only problem is solved by NAT.

The only reason why NAT is "solving" the problem is because IPv6 is taking some of the pressure off. T-Mobile US has 120M subscribers:

* https://www.statista.com/statistics/219577/total-customers-o...

And they went to IPv6-only:

* https://www.youtube.com/watch?v=QGbxCKAqNUE

There's no way that would work in a no-IPv6 / IPv4-only world. Comcast ran out of 10/8 address space to manage their cable modems: how would that work without IPv6?

Google says India is 74% IPv6:

* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

How would that work with only IPv4?

Even on smaller scales, without IPv6, supporting IPv4 with CG-NAT can get really expensive, real fast:

> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.

* https://community.roku.com/t5/Features-settings-updates/It-s...

* Discussion: https://news.ycombinator.com/item?id=35047624


Self-follow-up:

Google says India is 74% IPv6:

* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

How would connectivity for 10^9 people work with only IPv4? See also China. Each of those countries is 2^30 people, plus add another 2^30 for the continent of Africa, and you're already over 2^31. IPv4 is 2^32 addresses.


Yeah, he's not wrong. I just found his take on IPv6 to be pretty pessimistic at that time. His manifesto from today is much more positive.


I really liked the premise of the post until I got to the last paragraph and had to do a quick double take.

Sure, Tailscale makes the internet easier again, but I still have to rely on a landlord. Something I didn't/don't have to do for the internet. As much as a lot of stuff has been centralized, even today I can connect to any server in the world with just the link.


> I really liked the premise of the post until I got to the last paragraph and had to do a quick double take.

"Be sure to drink your Ovaltine."


> I still have to rely on a landlord.

This is a very good point. Counterpoint is self-hosting Headscale which I mentioned in another comment here: https://github.com/juanfont/headscale

Works with native Tailscale clients with a few config changes. I use it myself.


I'm distracted by all the references to being "old" because the author remembers the 1990s.


Life moves pretty fast.

I wouldn’t be too surprised if the median age of Tailscale’s audience was 24.


People born in 2000 use this?

What is it?

Can we call things for what they are? Is this a VPN? :)

I'm tired and I am 34. So tired.


Seeing the average age range of people who put "founder" on their LinkedIn, I'm not very surprised.


> You know what, nobody ever got fired for buying AWS.

> That’s an IBM analogy.

Wow, this dialogue comes up in the first episode of Halt and Catch Fire. I didn't know this was a real thing.

Here's the clip at 1:51, if anyone's interested: https://www.youtube.com/watch?v=XOR8mk0tLpc


> In fact, we didn’t found Tailscale to be a networking company. Networking didn’t come into it much at all at first.

I always just assumed they were building some kind of logging software (“tail”scale), used Wireguard to connect hosts, and just kind of stopped there. Don’t get me wrong, Tailscale is a nice way to connect machines. It’s nice because Wireguard is nice.



This long blog post (by the now-CEO of Tailscale), if you skip to the end, shows that the parent's hypothesis is basically exactly correct.

> Update 2019-04-26: Based on a lot of positive feedback from people who read this blog post, I ended up starting a company that might be able to help you with your logs problems. We're building pipelines that are very similar to what's described here.

Update 2020-08-26:

Aha! Okay, for some reason this article is trending again, and I'd better provide an update on my update. We did implement parts of this design for use in our core product, which is now quite distinct from logs processing.

After investigating the "logs" market last year, we decided not to commercialize a logs processing service. The reason is that the characteristics we want our design to have: cheap, lightweight, simple, fast, and reliable - are all things you would expect from the low-cost provider in a market. The "logs processing" space is crowded with a lot of premium products that are fancy, feature-filled, etc, and reliable too, and thus able to charge a lot of money.

Instead, we built a minimalistic version of the above design for our internal use, collecting distributed telemetry about Tailscale connection success rates to help debug the network. Big companies can also use it to feed into their IDS and SIEM systems.

We considered open sourcing the logs services we built (since open source is where attributes like cheap, lightweight, etc tend to flourish) but we can't afford the support overhead right now for a product that is at best tangential to our main focus. Sorry! Hopefully someday.


Wireguard by itself is good, but it isn't nice. Tailscale is nice because it builds on top of Wireguard (which is good) and adds UX stuff (which makes it nice).

Nice requires humane UX.


IPv6 + transport mode IPsec + opportunistic encryption with TOFU or other topologies of trust (including WoT, DNSSEC and PKI). All that is standard, most of it is available and only requires configuration (and, ideally, being turned on by default).

There is very little use for companies like Tailscale in this setup; it's scalable and works.


Gets killed by IPv6 firewalls.


> Every device gets an IP address and a DNS name and end-to-end encryption and an identity, and safely bypasses firewalls.

Tailscale can certainly be blocked on NGFW firewalls like Palo Alto. I am not a BOFH, but also can’t have random employees circumventing security policies by setting up tailscale and leaving permanent backdoors in my corporate network.

I remember the good old days when everyone had a public IP on the Internet and how easy it was to setup things. It was cool and fun while it lasted. But now things are different and security is a nightmare when we have to deal with things like ransomware.


Tailscale doesn't even try to hide its management-plane (MP) or data-plane (DP) traffic. Last I checked, management was plain HTTPS and data was plain WireGuard.


> can’t have random employees circumventing security policies by setting up tailscale and leaving permanent backdoors in my corporate network

Tailscale isn't exactly an open door. Only machines signed-in via SSO can access a Tailscale network.

If you don't trust your employees to safeguard their credentials and machines then how do you trust them at all? Keep them in an airtight underground bunker chained to their desks? Not sure what threat you're modeling for...


I'm talking about people who want to use Tailscale for personal reasons. For example, someone can set up a Tailscale instance between their work computer and home computer and circumvent the corporate VPN/MFA policies for remote access. I doubt they're being malicious, but what if their home PC gets hit with malware? A threat actor could then use the existing Tailscale instance to get into the corporate network.


So the answer to the bad old internet is to install tailscale on everything?


I think the message from this post is we’ll pay rent to Tailscale instead of AWS eventually.


Even though I don't agree with the whole "New Internet" thing, this article is very well written.


I agree. This article reads well. It has flow, good punctuation and pacing. Apart from the message that we will eventually pay tithe to our new overlord Tailscale, it was great.


I mean it was given as an internal presentation about the business strategy, it's the kind of thing I would expect.

1. We took a hard-problem, peer-to-peer networking and IdM, and (mostly) solved it.

2. We're hoping this will drive people to build apps that leverage the unique capabilities of authenticated p2p mesh networks. It doesn't even have to be specifically for Tailscale.

3. People will want to use those apps and (if we're good at our jobs) choose to pay us to run the network for them over our competitors or building something in-house.

4. $$$

I'm not sure I would say this is as nefarious as the tone of the comments here suggests. Wanting a "killer app" for your software platform is pretty normal, which is really what he's talking about. I would be nervous declaring victory or an inevitability without being able to name what that killer app actually is, but trying to figure it out/build it is a good strategy. It's one of those times where the desire for engineers to solve their pet problems, play with shiny new toys, and build Halo LAN Party over Tailscale is aligned with the business.


> Sure, Apple’s there selling popular laptops, but you could buy a different laptop or a different phone.

> But the liberation didn’t last long. If you deploy software, you probably pay rent to AWS.

There's no Azure? GCP? Hetzner? Digital Ocean?

> You pay exorbitant rents to cloud providers for their computing power because your own computer isn’t in the right place to be a decent server.

You do that because you don't know what port-forwarding is (the vast majority of software people do not), or you don't have the place or infra in your dwelling to stash a laptop server running 24/7 without interruption.
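
A rough sketch of how little is involved once a port is forwarded, assuming your router forwards TCP 8080 (an arbitrary choice) to this machine; no TLS, auth, or hardening, so don't actually expose anything like this:

    # Serves the current directory to anyone who can reach your public IP,
    # assuming the router forwards TCP 8080 to this machine. Illustrative only.
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    print("serving on port 8080; reachable if the router forwards this port")
    server.serve_forever()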


You probably also have a residential IP and need to pay some service (Ngrok?) to convert that into a static address that's useful, right?


The new internet, an overlay network on top of the existing internet. Cool?


It seems less bad than the competing vision of tunneling absolutely everything over HTTPS.


At least it's not a network that revolves around middle-out compression.


What's really needed along those lines is a regular ordinary Mbone that's permanent this time:

https://en.wikipedia.org/wiki/Mbone


Did anyone else immediately do the calculation of 8.1 billion (world population 2024) * 1/20000 = 405k user base? Which makes me wonder what percentage are paying users.


Ehm, sorry, no. OSes matter as much as before. Even if today's giants want to call desktops and the like "just endpoints", a politically correct variant of the old dumb terminals attached to "their mainframes", we actually know very well that "the intelligence" must live in the endpoints, and no mainframe-like modern solution can scale or serve well in that regard. Of course we need networking, a network of individual hosts, not of dumb endpoints.

Devs have lost that knowledge because big tech has trained them to lose it, and now we see more and more of the limits of their model. The new internet must be the very old one: a network of hosts communicating with each other, without NAT and the like in the middle, which in most cases exists explicitly to lock users' hosts behind some giant iron curtain.

The modern web matters today because we lack UIs: commercial desktops settled on widget-based UIs and have hit their limits hard, finding in the modern web a crappy modern version of the old classic DocUIs, and we know just as well that we need DocUIs. Slowly we are coming back to end-user programming, admitting that visual crap and every attempt to make programming hard on purpose led to unsustainable crapware ecosystems. Maybe in a decade spreadsheets and "calculators" will finally be dropped, and Jupyter/R-like tools will have replaced them, perhaps with some LLM plugged in to help the average user. In another decade we will probably be back at Lisp Machines, because trying other paths to profit from users is no longer sustainable.

The shorter this period is, the less damage we will suffer.


As I've been deliberately moving toward self-hosted computing, under my control, on my home network, I've had a growing feeling that we're on the cusp of something transformative... for those who want it and those who care. There's an ecosystem of mostly FOSS software now designed to run on a home network and replace big, centralized cloud providers. That software is right on the edge of being easy enough for everyone to use and for sufficient numbers of people to deploy and administer. News like Immich (to replace Google Photos) getting a major investment thanks to Louis Rossmann and FUTO [1] is encouraging. The ecosystem of software you can now run on a commodity-built NAS or homelab is, for me, the most exciting thing in computing since I first used the Internet in the late 90s.

The rollout and transformation, if they happen, won't look like all this stuff becoming so easy that every individual can run a server. But it is possible that every extended family will have at least one member who can run a server or administer a private network for the whole clan. And that's where tech like Tailscale's offering will come in. That's where I see the author's vision being a believable moonshot:

Each extended family, and some small communities, with their own little interconnected, distributed network-citadels, behind the firewalls of which they do their computing, their sharing, and their work. Most family members won't need to understand it any more than they understand the centralized clouds they use now. And most networks won't be as well secured as a massive company can make its cloud offering, but a patchwork heterogeneity of network-citadels creates its own sort of security, and significantly lowers the value of any one "citadel" to even motivated adversaries.

[1]: https://www.youtube.com/watch?v=uyTPqxgqgjU


Totally this. I am old enough to remember LAN fun times, and I've been writing software since the 1970s, starting in high school.

And Tailscale works for me to create my own network of phones, laptops, desktops and a remote node at DO. It works brilliantly across geo boundaries, borders, wifi networks (home has multiple), and moves seamlessly between mobile networks and wired.

Not sure it will create a new internet or not, but at least a new intranet where all my devices are reachable and controllable.


> But it is possible that every extended family will have at least one member who can run a server

That's as may be; but many, many people have no access to an "extended family". And extended families are not necessarily warm, safe spaces where everyone trusts everyone else; extended families are more likely to be "broken" than nuclear families.


> And extended families are not necessarily warm, safe spaces where everyone trusts everyone else; extended families are more likely to be "broken" than nuclear families.

It is a good thing to promote and advance privacy, security, and freedom to isolated, atomized individuals; but it is important for all of humanity to promote and advance those same ideals to extended families. People who have no access to an extended family will ultimately either join a different one or disappear into the mists of ages past. In 100 years, the Earth will be populated mostly by the descendants of people in extended families today, however imperfect or even broken those extended families may be. If those people today don't see privacy, security, and freedom as both possible and worthy, their descendants may not value or even possess any of those ideals.


> As I've been deliberately moving toward self-hosted computing, under my control, on my home network

Funnily enough, I was once like this but now I have deliberately moved everything to the big cloud providers as I don’t want to deal with the toil of running my own homelab anymore. This is coming from someone who used to have a FreeBSD server with ZFS disks and using jails to run various things like pf, samba, etc. Eventually things would fail and it felt like I was back at work again when all I want to do is drink a cold beer and watch YouTube.

Perhaps I will try again one day as things get easier. For now I am content with having my photos and videos automatically synced up to iCloud/Google Photos.


I tried once or twice in the early 2010s to set up a home server and had a similar experience to what you describe. Stuff would break and I wouldn't want to spend time fixing it.

I think part of the excitement I'm feeling is that the ecosystem today feels way more stable and mature than it did a decade to a decade and a half ago. Home Assistant, Jellyfin, TrueNAS, and a few other things have all pretty much run themselves for me with almost no downtime (other than one blackout that happened while I was traveling and drained my UPS) for the past nine months. There's tinkering to get it all up and running, but way less maintenance than I remember in the past.


I am curious what your setup was. I have several systems; not sure I would call it a homelab, but I rarely have to do anything. I am using TrueNAS for my ZFS storage and I have a few NUCs to run extra QoL services.

The only time I do anything with this stuff is when I want to upgrade (which is very rare) or add something. My NAS solution is a custom mini-ITX build from 8 years ago, which I feel has more than paid for itself. I have long stopped chasing the latest and greatest, because most of what has been produced in the last decade is very usable.

Very wary of going cloud, as I can't as easily control costs.


What are the hard parts of hosting from home? What solutions have you been using?


>at least one member who can run a server

It may be highly illogical, but maybe by shooting for zero it would be possible to bat a thousand?

I do everything it takes so that the "extended family" site just works after I leave, as long as the "operator" can keep track of their USB sticks.

Scrap PCs being used as media servers have no internal drive.

Boots to the stick containing the server app.

Accesses media on a second stick containing the files.

Hotplug the media stick to emulate game-cartridge/VCR-cassette convenience.

Upon server failure or massive update, replace that particular stick with a backup or later version, or in the worst case get another scrap PC.

I know, easier said than done :/


A medieval castle could be defended by surprisingly few people, but not by zero people. And a castle full of people who don't know how to fortify and maintain its defenses eventually becomes someone else's castle.

Even if you aim for zero required sysadmins in the short term after your own passing, I think the computers you leave behind will run into a similar case of the same general problem in the long term: there's no such thing as an entropy-proof system. Castle walls erode and weapons rust if there are no skilled people to maintain them. Computer components slowly break down due to ordinary wear and tear. Software configurations become obsolete and unable to talk with other software, and become less secure as vulnerabilities are discovered over time. If there are no skilled people at all to maintain a familial network-citadel, it will eventually break down and fall into disuse.


You have hit the nail on the head.

Especially with passing: eventually it's like the siege of the Alamo; when the walls do end up breached, there's not a soldier there who can do any good.

It's shoestrings anyway, and amazing that it's working for now :)



"layers" have been a major motif of the write up.

> We’re removing layers, and layers, and layers of complexity, and making it easier to work on what you wanted to work on in the first place.

as an avid user, i'd say they are in fact adding more layers to the problem. it is well-designed and relatively accessible, sure, but it's a stop-gap solution while everyone slowly pushes toward the eventual solution.

it has always been the double-edged nature of abstraction: we hand trust and responsibility to another party, and for us networking works out "magically". but the moment your remote client has some auth issues, you snap back to reality. bandwidth costs aside, it seems that their otherwise generous pricing model is economically viable in a post-ai landscape.

i'd personally like to act like a "landowner" of the internet, but currently being a renter seems like a good idea while we all wait for social housing to finally get accepted.


I used to use WireGuard. It connects one peer to another, but stops there. I have since replaced it with Tailscale. It goes well beyond WireGuard and connects everything to everything. A lot of my networking problems went away. After using Taildrop for several months, I feel the post is right about it. It's a frictionless, one-click, peer-to-peer file transfer tool that is very useful. It should have been built into the internet.

The idea of an entirely decentralized internet is wishful thinking. You always need servers. Even with IPv6, you have to run a STUN or DDNS server, since IP addresses change. Do you want to run them at home? I don't.
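
For reference, this is roughly all a STUN lookup does: ask an outside server which public address and port your packets appear to come from. A rough Python sketch of a binding request (stun.l.google.com:19302 used as an example public server; error handling, response validation and IPv6 omitted):

    # Ask a STUN server what our public (address, port) looks like from outside.
    # Sketch only: no retries, no transaction-id check, IPv4 responses only.
    import os, socket, struct

    MAGIC = 0x2112A442  # STUN magic cookie

    def public_endpoint(server=("stun.l.google.com", 19302)):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3)
        txid = os.urandom(12)
        # Binding Request: type 0x0001, length 0, magic cookie, transaction id
        sock.sendto(struct.pack("!HHI", 0x0001, 0, MAGIC) + txid, server)
        data, _ = sock.recvfrom(2048)
        pos = 20  # skip the 20-byte response header
        while pos + 4 <= len(data):
            atype, alen = struct.unpack_from("!HH", data, pos)
            if atype == 0x0020:  # XOR-MAPPED-ADDRESS
                port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC >> 16)
                raw = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC
                return socket.inet_ntoa(struct.pack("!I", raw)), port
            pos += 4 + alen + (-alen % 4)  # attributes are padded to 4 bytes
        return None

    print(public_endpoint())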

I do think Tailscale is on the path to different networking.


One thing I don't understand. The article claims that we need to pay rent to big corps like AWS, which is true only if you're offering something on the internet (e.g., you have a SaaS). As a consumer I don't pay AWS; I only pay my ISP. Now, the article wants everyone (both the ones who have something to offer and the consumers) to switch to this new internet… so both producers and consumers (peers now) need to pay rent to Tailscale (unless you self-host, but self-hosting is like the first story told in the article about asking your ISP for a static IP address, opening ports and the like; self-hosting is too much work).

Smells like more centralisation, not less.


As a buyer you also don't pay VAT to the government. You pay the seller a price that's marked up exactly by VAT, which the seller then pays to the government.

In summary, just because you don't pay it directly doesn't mean you don't pay it indirectly.


Tailscale complains about centralized actors controlling the internet, yet doesn't let you sign up with your email and strictly requires a Microsoft/Meta/Google account. Can't make this up.


You can use just about any OIDC provider.

Some of the self-hosted options presented during sign up include Keycloak, Ory, Gitea, Zitadel, Authelia, and more.

There's also a workaround to create a passkey account: sign up with any SSO provider, invite yourself as an external user, accept that invite and sign in with a passkey, then leave the original SSO network. Then you're not tied to any external service at all.


Or, wild idea, just allow email signups like everyone else.


Agree or disagree, a Tailscale co-founder explained why they went down this path.

https://news.ycombinator.com/item?id=22760130

> (I'm a Tailscale co-founder) The idea is to avoid building yet another commercial service that holds onto your username and password. People have enough identities already. More details here: https://tailscale.com/blog/how-tailscale-works/ We know we keep getting feedback that people want a different way to authorize their accounts (especially for personal use), so we're looking at other options. We just really want to stay out of the username+password business; it's simply bad security practice.


I love this.

Except to use Tailscale you do need to bring in a whole OIDC authentication provider.

It's all small and aimed at avoiding scale until the very first step, when suddenly only the big, complex thing is acceptable.

I still just want to use my email and a TOTP code. The only one of the auth providers Tailscale supports that I have is GitHub, but I don't use GitHub beyond work, as I self-host my git.

When the onboarding is "maintain and run a full OIDC provider", all we've done is trade one aspect of complexity for another.


Speaking of Tailscale, does anyone know how, on Windows, to prevent it from significantly slowing down file transfers between peer computers on my home LAN?

I don't really understand it: I can use the direct IP address of the other machine and still see tailscaled.exe using a lot of CPU, with my file transfer running at only 65 MB/s. If I right-click the system tray icon and exit Tailscale, the transfer speed instantly jumps to 109 MB/s (the maximum of my Gb/s LAN).



I'll preface this by saying: I DO appreciate Tailscale and what they've done for frictionless VPNs; I use it daily. But this post has a really unfortunate tone; it comes across as really arrogant. Not ambitious, but arrogant. The notion that the population as a whole is buying Tailscale because it might offer some as-yet-unpublished capabilities at some indeterminate point in the future... is delusional. And Tailscale's moat is very shallow: yes, there's some nifty networking stuff going on, but it's well understood, and the functionality will be replicated by competitors as Tailscale gains mainstream traction, however big their war chest is.


That looks like Hamachi of 20 years ago.


nordvpn is the new frosties tony


This is a 'gimme some of that Wix money' IPO/acquisition ramp-up post, right?


When is the government going to start requiring IPv6 more aggressively?

It's that simple.


My ISP deployed IPv6 by serving ULAs to clients behind the ISP box and NATing to a single dynamic public IP.

Another one just blocks all incoming connections over IPv6 entirely.
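
If you want to check whether the v6 address your ISP hands out is actually a ULA, here's a quick sketch using Python's ipaddress module (the example addresses are made up):

    # Is the v6 address the ISP handed out a ULA (fc00::/7, not publicly
    # routable) or a real global address? Example addresses are made up.
    import ipaddress

    def classify(addr):
        ip = ipaddress.ip_address(addr)
        if ip.version != 6:
            return "not IPv6"
        if ip in ipaddress.ip_network("fc00::/7"):
            return "ULA (only reachable through the ISP box's NAT)"
        return "global unicast" if ip.is_global else "other (link-local, etc.)"

    print(classify("fd12:3456:789a::1"))   # -> ULA
    print(classify("2600::1"))             # -> global unicast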


I’m confused by this New Internet talk. Tailscale is nice and all, but it gives you virtual private networks. To talk to anything you need to first be invited to that private network. It’s more like the New LAN, great for intranet and shit. But how am I supposed to build Internet apps intended for everyone if they need to be invited to my network first?

Not to mention that most Internet services need a central backend to function even if there are no barriers among clients at all, because the clients are completely unreliable. That includes even the textbook p2p example, file transfer: while direct p2p is nice in many cases, with a central service the recipient can receive at any time, instead of having to coordinate with the sender so that both stay online simultaneously for the duration of the transfer, which is quite difficult nowadays with most computing happening on phones.


Maybe there will eventually be an Intertailnet that connects all the tailnets together (in some secure, opt-in way of course).


How is having a TLS cert considered to currently be a “have”? Seems like a deployment issue for your colo and edge presence (for those eschewing AWS).


> centralization is bad except when we do it

TL;DR


I love this blog post. It resonates so much. But I honestly don't know how to deploy applications efficiently without containerization. And where there are containers, there's Kubernetes. And so on and so forth.


I've been an active Tailscale user for years now, preaching the Gospel of Wireguard Control Planes to all who will listen (and many who won't) in both my personal and professional life.

It's been really disheartening to watch the steady enshittification of Tailscale, Inc. I knew it was coming with 100% certainty once they raised $100M in 2022. It's still heartbreaking because the product itself is quite good.

The worst part is that because Tailscale, Inc. got there "first" (I know Nebula existed before Tailscale did; shut up, okay?), the other competitors like NetMaker and NetBird are all following almost exactly the same business model ("open core+": an open-source client and some kind of claim to an open-source control plane, with infinite caveats, to funnel enterprise dollarydoos back to the vulture capitalists).


thanks for your insights.

> The worst part is because Tailscale, Inc got there "first"

never heard of nebula, but please clarify where they got there first.

i'm sure you are aware that branded/purpose-built vpns existed long before the first iphone.


not sure what you mean by "enshittification"

are you describing the process of a company achieving commercial success?


IPv6 is for poor people; until the poors figure out something really cool you can do with it, the rich will never switch.


imho at this point it's a generational thing. there's just no love in v6, so it'll take over once the old guard go out of office and nobody cares enough anymore.


> I read a post recently where someone bragged about using Kubernetes to scale all the way up to 500,000 page views per month. But that’s 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep.

lmao


the new internet? i'm still on BBS. don't wanna use computers without ANSI art.



