
M-DISC is an optical format designed and tested for 100+ years of storage; it can be read by a consumer DVD player and costs <$10 a disc.

LTO9 is something like 45TB for <$100 (I got a bunch for €55 apiece), so 4.5TB for <$10 is being generous. And even if you didn't trust them to last 30-40 years and made copies every 3 years, it's still cheaper, not to mention you have fewer tapes to manage.

Also: I don't have a BD/DVD player in my house today, so even with the most tremendous gains in medical science I'm almost certainly not going to have one in 100+ years either, so I'm not sure M-DISC even makes cost sense for smaller volumes.

Maybe if you want to keep your data outside in the sunshine like the author of the article, but that's not me...


LTO-9 tapes are actually 18TB, but yes they are a lot cheaper than optical discs. If you can afford the drive.

> so even if there are the most tremendous gains in medical sciences I'm almost certainly not going to have one in 100+ years

Never say never. People of today are building "90s entertainment center" setups for nostalgia, complete with VCRs. Given how many generations of game consoles had DVD drives (or BD drives that supported DVDs) in them, I would fully expect the "retro gaming" market of 100 years from now to be offering devices that can play a DVD.


> Also: I don't have a bd/dvd player in my house today

You have just stumbled on the inherent problem with any archival media.

You really think you will have a working tape drive after 40 years?

Hell, in my experience tape drives are mechanically complex and full of super thin plastic wear surfaces. Do you really expect to have a working tape drive in 10 years?

As far as I can tell there is no good way to do long-term static digital archives, and in the absence of that you have to depend on dynamic archives: transfer to new media every 5 years.

I think for realistic long-term static archives the best method is to depend only on the mark 1 eyeball. Find your 100 best pictures and print them out. Identify important data and print it out. Stuff you want to leave to future generations, make sure it is in a form they can read.


I do think LTO is a common enough format, and explicitly designed to be backwards-compatible, that it is very likely to be around in 10 years. The companies that rely on it wouldn't invest in it if they didn't think the hardware would be available. 40 years, harder to say, but as someone who owns a fair bit of working tape equipment (cassette, VHS, DV) that is almost all 25+ years old, I wouldn't think it'd be impossible.

That said, I imagine optical drives will be much the same.


It is only backwards compatible for two generations; occasionally something slips at the LTO consortium (or wherever those things are designed) and you get three. But if I have a basement full of LTO1 tapes, no currently manufactured drive will read them. I would have to buy a used drive, and the drives were never really made all that well. Better than the DAT drives one company I worked for used for some of their backups, but still mechanically very complex, with many, many small delicate plastic parts that wear out quickly. Those DAT drives were super delicate and also suffered from the same generational problems LTO does. We had a bunch of DAT1 tapes somebody wanted data from but had no working drives to read them. All our working drives were newer DAT3 and DAT4.

That was always the hard part of justifying tape backup: the storage is cheap, but the drives are very expensive. And they never seemed to last as long as their price would warrant.


That also changed somehow... LTO-10 drives are not backward compatible and can only read/write LTO-10 media.

That is because LTO-10 had to make an incompatible change to go from 18TB to 30TB.

For LTO tapes? Yes they will be available since the format is so common.

LTO9 is only 18TB.

The LTO compression ratio is theoretical, and most people's data won't compress well with the native LTO compression method used.


3 years is way overkill. 10 years is more reasonable.

It's still a standard-ish format though, and not designed from the start for archival use.

Apparently mini discs use a different burning method (obviously) and are very very stable.


IIRC there exist "magneto-optical" disks and drives for PCs that use a similar technology, but they were niche even when that technology was current.

There are a lot of places that I see AI disrupting - but I'm not buying that SaaS is going to be a significant one.

Reading through the article:

> They were paying $30,000 to a popular tool

A couple of things we'd need to understand here:

  - How large is the client company
  - Is that $30,000/month or day or hour....
If it's a technology company of > 1000 employees - then $30,000/month doesn't even get Finance's attention. And there is next to zero chance that anyone is going to vibe-code, deploy, support and run anything in a 1000+ person company for $30,000 a month. SaaS wins hands down.

Any product/service that people care about comes with a pager rotation - which is 6-7 employees making > $200k/year. If you can offload that responsibility to a SaaS for < $1MM/year - done deal.


Yeah, but in a company of 100 employees, for software that costs 30k a year, it's more than worth it to take your standard 50k (GBP) dev and have them replace it. It's a one-time cost, and the support time will certainly be less than 50% of their time every year, so it saves money.

There are many companies that operate like this all over the world. Outside of the hyper-growth tech/VC world, cutting costs is a very real target, and given how cheap devs are outside of America it's almost always worth it.


$30k/year? For 100 employees. So - $25/seat/month?

I can't imagine it would ever be worth, under any scenario, trying to write/build/support any $25/seat SaaS software for any company I've worked at in 25+ years.

Another thing to keep in mind - very little of the cost of a SaaS license is the time it takes to build the software. Security, support, maintenance, administration, backups/restores, testing/auditing said backups/restores, etc., etc. - and then cross-training new SREs on how to support/manage this software, ...

Even as someone who spends 10+ hours a day churning out endless LLM applications, products, and architectures from my myriad of Cursor/Codex/CC interfaces and agents - I'm dubious that LLMs will ever eat into SaaS revenue.

I'm sure (lots of) people will try - and then 1-2 years in someone will look at the pain, and just pull the ripcord.


I'm intrigued, as tmux has been my desktop window manager for 10+ years now (I typically have 80-100 different windows/panes in play by the end of any given week, when I take time to close down all sessions that aren't still in progress).

I'm wondering what the difference is between this and the basic tmux environment, which already has a lot of pane/window management. What's the key distinction between using plain tmux and dwm.tmux?

<5 minutes later> - Ah - this is just tmux with some custom config. The window manager is tmux - I would suggest changing the title a bit - maybe something like, "DWM.TMUX - dwm-inspired tmux configs."

<Further review - note the "10 years ago" timestamp - ahh.. This has been gestating for a while>


I think the key distinction is the consistent layout (main pane + stack) along with keyboard shortcuts to manage. To me it's similar to running vanilla X{11,org} vs using a window manager (hence the name). A vanilla configuration will work just fine but sometimes a constrained or opinionated environment gets more out of your way and better fits your preferred workflow.
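
For anyone curious what that looks like in a bare ~/.tmux.conf, here's a minimal sketch of the main-pane + stack idea (the keys and sizes are my own placeholders, not necessarily what dwm.tmux actually binds):

  # big "main" pane on the left, remaining panes stacked on the right
  bind Enter select-layout main-vertical
  # how much of the window the main pane gets (percentages need tmux 3.2+)
  set -g main-pane-width 60%
  # promote the active pane into the main (leftmost) slot
  bind o swap-pane -t "{left}"

dwm.tmux layers shell logic on top to keep a layout like this enforced as panes come and go, which a handful of static bindings won't do on their own.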

If you already have a robust tmux workflow with a desired layout (or lack of layout) and custom keyboard shortcuts then this may not work for you. It's just one way to manage panes/windows in tmux that I hadn't seen before and different from the usual ad hoc methods.

Like most window managers, I think it's all preference. What're your current preferences for pane layout, window management, etc? Do you always create/layout panes in the same way or is it situationally dependent?


It's not just configs though, as there is some logic implemented via shell that could not be handled entirely in configs. "Window Manager" was chosen because the logic imposes a specific layout without necessarily preventing you from using other configuration options. It's almost solely layout management and keyboard shortcuts to assist.


For Floating Panes - see: https://github.com/lloydbond/tmux-floating-terminal/tree/mas... (if it doesn't work for you on first try - check - https://github.com/lloydbond/tmux-floating-terminal/pull/6)

Love floating panes in tmux - and the best part: all the other plugins (resurrect, continuum, etc.) support floating panes out of the box.


This does have a single floating pane shortcut (in the current directory), using the tmux `display-popup` command.
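
If you want to wire up something similar by hand, a rough one-liner for ~/.tmux.conf (the key, sizes, and flags here are just an illustrative guess, not the project's exact binding):

  # Alt-f: floating popup shell in the current pane's directory, closed when the shell exits
  bind -n M-f display-popup -E -d "#{pane_current_path}" -w 80% -h 75%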


Just to nitpick a bit: what people typically mean when they say "IPv4 NAT" is network address and port translation. My 192.168.0.1 internally becomes 172.217.12.100, and my port gets converted to something that is tracked so that the return packet can find its target.

In IPv6, prefix translation is similar, in that the /64 prefix is translated 1:1 - but the 64-bit host part is (in my experience) left alone - so renumbering a network becomes trivial when you change ISPs: you just change the prefix.
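
A made-up illustration of that 1:1 swap (documentation prefix on the public side; the host bits ride through untouched):

  inside:  fd00:aaaa:bbbb:1:13:50ff:fe12:3456
  outside: 2001:db8:1234:1:13:50ff:fe12:3456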

I don't actually know if "IPv4 NAT" behavior even exists in the IPv6 world, except in the form of a lab experiment.


From my understanding, the "IPv4 NAT" equivalent for IPv6 is generally referred to as NAT66 (NPTv6 for Prefix-Translation). For example, Fortinet offers this on their firewalls, and I believe most firewall vendors have this option.


What they're saying is NAT66 on Fortigates is 1:1 NAT, i.e. prefix translation, not n:1 NAPT, i.e. address+port translation.

I can't imagine why one would ever intend to use NAPT over 1:1 NAT when the addresses were available though (unlike on IPv4, where having a minimum of 2^64 public addresses per connection is not a given), which is the only reason I wouldn't expect anyone to have bothered implementing it. So sure, it's what people refer to on IPv4, but it's not materially different from 1:1 NAT or necessarily adding any additional value.


You can do the many-to-few (or one) NAT behavior with port rewrites in IPv6 if you want to; there are just few circumstances where it makes any sense.

FWIW the broad IPv6 network-prefix NAT behavior ALSO EXISTS in IPv4, it's just less applicable.
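
For what it's worth, on Linux the iptables NETMAP target does this kind of 1:1 subnet mapping for IPv4 - a rough sketch, with the interface and subnets made up:

  # statically map 192.168.1.x source addresses onto 10.0.1.x on the way out of eth0
  iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j NETMAP --to 10.0.1.0/24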


This is the first thing I was taught as a network engineer - and every formal security class I've taken (typically from Cisco; they have awesome courses) repeats the same thing.

I believe the reality is somewhat more nuanced than the common knowledge would have you believe.

I present to you two separate high-value targets whose IP addresses have leaked:

  IPv4 Target: 192.168.0.1
  IPv6 Target: 2001:1868:209:FFFD:0013:50FF:FE12:3456
Target #1 has an additional level of security in that you need to figure out how to route to that IP address, and heck - who it even belongs to.

Target #2 gives away 90% of the game when attacking it (we even leak some device-specific information, so you know precisely where its weak points are).

Also - while IPv6 lacks NAT, it certainly has a very effective Prefix-translation mechanism which is the best of both worlds:

Here is a real world target:

  FDC2:1045:3216:0001:0013:50FF:FE12:3456
You are going to have a tough time routing to it - but it can transparently access anything on the internet - either natively or through a Prefix-translation target should you wish to go that direction.


For your example, shouldn't you either present two "private" IP addresses, in which case you'd replace the IPv6 address in your example with what is likely to be an autoconfigured link-local address (though any ULA address would be valid as well),

OR present the two IP addresses that the targets would be visible as from the outside, in which case you'd replace the IPv4 address with the "public" address that 192.168.0.1 NATs to, going outbound?

Then, the stated difference is much less stark: In the first case, you'd have a local IPv6 address that's about as useless as the local IPv4 address (except that it's much more likely to be unique, but you still wouldn't know how to reach it). In the second case, unless your target is behind some massive IPv4 NAT (carrier-grade NAT probably), you'd immediately know how to route to them as well.

But presenting a local IP for IPv4, and a global one for IPv6, strikes me as a bit unfair. It would be equally bogus to present the public IPv4 address and the autoconfigured link-local address for IPv6 and ask the same question.

I do concede that carrier-grade NAT shifts the outcome again here. But it comes with all the disadvantages that carrier-grade NAT comes with, i.e. the complete inability to receive any inbound connections without NAT piercing, and you could achieve the same by just doing carrier-grade NAT for IPv6 as well (only I don't think we want that, just as we only accept IPv4 CGNAT because we don't have many other options any more).


In these contexts, neither of the addresses was intended for internet consumption. A misconfigured firewall exposes you in the case of IPv6 routable addresses, and is less relevant in the case of IPv4; the ULA IPv6 address is roughly the same as an RFC 1918 address with its lack of routing on the Internet.

The point I was (poorly) trying to make is that non-routability is sometimes an explicit design objective (See NERC-CIP guidance for whether you should route control traffic outside of substations), and that there is some consideration that should be made when deciding whether to use globally routable IPv6 addresses.


No, that's the whole point.

Imagine I've shared output of "ifconfig" on my machine, or "netstat" output, or logs for some network service which listed local addresses.

For IPv4, this is totally fine and leaks minimal information. For IPv6, it'll be a global, routable address.


That's a pretty weird threat model. Like, yeah commands you run on your machine can expose information about that machine.


Only in IPv6 world... in IPv4, it's all safe


Nope, iproute2 can still show your MAC address. And a curl ipinfo.io can show your public v4 address.


A MAC address is absolutely safe in the IPv4 world - the only info it gives away is the network card manufacturer.

And people don't usually share "curl ipinfo.io" output unless they plan to share their external IP (unlike "ifconfig" output, which is one of the first things you want to share for any sort of networking problems)


See the top comment in this thread:

Target #2 [IPv6] gives away 90% of the game when attacking it (we even leak some device-specific information, so you know precisely where its weak points are)

You may not consider the MAC address to be important, but the context of this conversation did bring it up. Of course, they forgot the fact that most v6 addressing doesn't expose MAC addresses anymore.


Especially as, if someone is able to capture ifconfig data, they can probably also send a curl request to a malicious web server and expose the NATed public IP as well.


Just because you can think of scenarios where the IPv4 setup doesn't make a difference doesn't discount that there are scenarios where it does.

Someone being able to observe some state is a different model from someone being able to perform actions on the system, and the former has many more realistic scenarios in addition to those of the latter.


People post their ifconfig data all the time, example: https://forums.linuxmint.com/viewtopic.php?t=402315


Or if you happened to curl ipinfo

Or if you had a script that did that and put the public v4 address in your taskbar.


> Or if you had a script that did that and put the public v4 address in your taskbar.

do people still do that? Dynamic DNS is offered by so many providers now...


I'm not sure I buy the "you get a leak of the address of a high value target you believe can be routed to over the internet in some fashion, but it's the internal address which leaked and you have no idea who could own said high value target either" story.

I agree that if it's an actual concern you can use NAT66 to hide the prefix; I just don't see how this achieves security when the only publicly accessible attack point is supposed to be the internet-attached FW doing the translation of the public addresses in the first place.

Additionally, if that really is the leaked IPv6 address then it's formatted as a temporary one which would have expired. If you mean static services which were supposed to be inbound allowed then we're back at the "the attack point is however the internet edge exposes inbound in both cases, not the internal address".


NAT66 doesn't add much in the way of security here, because the external address is fully routable and maps 1:1 to the internal address. You are once again fully dependent on a correctly configured firewall.

The IPv6 address that I shared was, in fact, a static (and real) IPv6 address belonging to a real device - one (with the possible exception of the last 3 bytes) that I likely worked on frequently.

Put another way - to do an apples to apples comparison:

  Hard to attack:   FDC2:1045:3216:0001:0013:50FF:FE12:3456
  Easier to attack: 2001:1868:209:FFFD:0013:50FF:FE12:3456


> NAT66 doesn't add much in the way of security here, because the external address is fully routable and maps 1:1 to the internal address. You are once again fully dependent on a correctly configured firewall.

When using the stateful firewall provided by Linux's packet filter, IPv6 NAT66 "masquerade" works very similarly to IPv4 NAT. A 1:1 mapping is NOT required.

For example, internal hosts are configured as follows:

  inet6 fd00::200/64 scope global noprefixroute
  ip -6 route add default via fd00::1
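
On the router side, a minimal sketch of the nftables rules (assuming eth0 is the upstream interface; the table and chain names are just placeholders):

  # enable IPv6 forwarding
  sysctl -w net.ipv6.conf.all.forwarding=1
  # create a NAT table with a source-NAT hook, then masquerade everything leaving eth0
  nft add table ip6 nat
  nft add chain ip6 nat postrouting '{ type nat hook postrouting priority 100 ; }'
  nft add rule ip6 nat postrouting oifname "eth0" masquerade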

Edit: From my understanding, the term NAT66 is ambiguous; it may refer to stateful port-based translation similar to IPv4 NAT, whereas NPTv6 is a stateless prefix-only translation.


Hardest to attack:

fcab:cdef:1234:5678:9abc:def0:1234:5678

The whole point is that your devices on the inside of your network can't be routed to at all.


It's the same difficulty to attack in all 3 cases: hack the internet firewall, which is the only point providing connectivity between the internal and external addresses regardless of what the address itself is.

You don't need to change the prefix to prevent an address from being routed to from the internet, but you do need a firewall if you want an address to be securely reachable from the internet. If you don't want an address to be reachable, what the address is doesn't matter at all, so long as you've implemented any possible way of making it unreachable.


Not true: 2001:1868:209:FFFD:0013:50FF:FE12:3456 provides some amount of geographic information about the target that the other addresses do not. No firewall is going to protect you from that. Of course, that is only going to matter in the specific scenario where your internal IP is leaked but the attacker has no other way of getting your external IP.


Okay - I'll bite - why is fc00::/7 harder to attack than fd00::/8?


It took me less than 1 second to access that 192.168.0.1 address! It wasn't that hard to find.

(;-)


Fast, too, isn't it? Must be on at least a 1Gbps connection.


Deeply ironic that Cisco would teach this, because it's the opposite of what they said when they introduced NAT.


Well - I can't say they have always said this - but at least from the circa-1998 CCNP onwards that's been their position. The instructors were very adamant - to the point that I'm recalling this 27+ years later.


This probably has more to do with network engineers (and CCNP instructors) not being security engineers (or even conversant with Cisco's security SBU).


If the IP address was leaked, wouldn't it be the address of the unit doing the NAT translation rather than the default gateway?


In the case of IPv4 - you almost certainly would get the external IP address of the unit doing NAT translation. In the case of IPv6 - it's quite common (outside of the enterprise world) for the Native IPv6 address of the device to be routed directly onto the internet - desirable even.

In the case of a "leaked" address - there are all sorts of ways in which internal details of an address can leak even when it's not in the DST/SRC envelope of the packet on the Internet.


NAT66 is evil but it is a necessary evil. There are certainly situations where using NAT66 is the best way.


Yup, by default a Linux-based router won't forward any traffic to an IPv6 host unless you explicitly have a program running that keeps telling the kernel you want that.


I'm sorry, this is just an elaborate argument of obscurity-as-security. You're clinging to privacy as though it were security, in stark avoidance of Kerckhoffs's principle.


> You're clinging to privacy as though it were security, in stark avoidance of Kerckhoffs's principle.

TIL that IPv6 is a cryptosystem


You can use Shannon's maxim instead if you're going to be deliberately obtuse. The point is true for any system intended to be secure, and a network is such a system, as is security software such as the NAT software being discussed.

Or do you really want to argue that Linux Netfilter/nftables or BSD pf being open source is a security problem?


I have the SAMSUNG 49" Odyssey Neo G9 G95NA - but despite spending literally dozens of hours, I was never able to get text to render clearly on it - on either Mac or PC - tried both DisplayPort and HDMI, tried all the (many) HDMI cables I had at home and a couple of expensive Monoprice cables, firmware updates, monitor resets, every setting I could find - no luck. Text is just ... fuzzy in a way that it isn't with any other monitor I've ever owned - kind of a deal breaker when I spend all day in tmux.


Is it an OLED display? It's likely that the subpixel rendering does not match the physical subpixel layout, so the subcolors land in the wrong locations, making the text worse instead of better (in the case of Macs, Apple entirely removed subpixel rendering some years back, so there is no solution whatsoever; Apple on standard-density displays always looks like shit).


I've gone the other direction - and after having struggled with various other monitors (the worst is easily the SAMSUNG 49" Odyssey Neo G9 G95NA - both cruddy capability (I should have noted before buying that it has no Power Delivery) and easily some of the blurriest text ever) - I've decided I will only ever buy Dell monitors. Every one I've purchased (5 of them) in the last 15ish years has been a flawless performer - no hardware failures either.

Every monitor on every desk at work (around 3000 desks) is a Dell U3821DW - no broadscale systemic complaints that I've ever heard of.

I'm currently using the 4K 27" Dell P2715Q that I bought for $400 back in December 2017, and I've carried it (physically) with me from office to office, from Michigan to the Bay Area - the thing has run 10+ hours a day (minus weekends) for 8 years running. Eventually it's going to have to give in - and when it does, I'm definitely going to buy another Dell (probably the U2725QE 27" 4K).


Yep, I can confirm: Dell's P (professional) and U (ultra) lines are excellent and work flawlessly. The S (standard?) line, not so much.


I'm kinda the same. I have a Dell monitor and a Gigabyte monitor side by side, and my Mac constantly loses the connection to the Gigabyte monitor. At least once per day I have to unplug the video link to the Gigabyte monitor to get the Mac to rediscover it; this never happens with the Dell one.


One of the major problems with this theory is that "cup" doesn't have any standard definition - measuring scoops marked as "1 cup" can be (ignoring outliers) anywhere from 227 to 236.6 to 240 ml. So, ignoring the fact that the same scooped "cup" of flour can vary by as much as 10-15%, the cup itself may be off by 6%. And you are never quite sure which cup the original recipe author was using.

This is why any halfway sane baker works off a scale.


And? Recipes might end up needing 1/3 more total flour just depending on the season, so why should I care how accurately it matches some kitchen separated by geography, time, and ingredients? If it doesn't taste right/feel right/look right, you'll know, and then you fix it.


It has an optional battery. This could be pretty epic for a glasses interface.


So a real cyberdeck then? (Case's Ono-Sendai was a plain slab with a keyboard and interface for the "trodes" that communicated directly with your brain.)


You'd want a TKL, not 105 keys, at the very least, if you were interested in portability.

