Hacker News | throwaway270925's comments

Also reminds me of the rollable map screen from Red Planet (2000).

As seen here: http://www.technovelgy.com/ct/Science-Fiction-News.asp?NewsN...


Do iPhones not have an "increase touch sensitivity" setting? That's all I had to do for my dad to be able to use his phone easily again (on a Samsung, though).

There are also phones with buttons again; the Unihertz Titan 2 Elite looks good, btw. Or Clicks add-on keyboards.


> A hard power cycle on a 3 device pool (data single, metadata DUP, DM-SMR disks) left the extent tree and free space tree in a state that no native repair path could resolve.

As a ZFS wrangler by day:

People in this thread seem happy to shit on btrfs, but this is very much not a sane, resilient configuration no matter the FS. Just something to keep in mind.


Might be true, but I don't see any aspect of that which is relevant to this event:

* Data single obviously means losing a single drive will cause data loss, but no drive was actually lost, right?

* Metadata DUP (not sure if it's across 2 disks or all 3) should be robust, I'd expect?

* I certainly eye DM-SMR disks with suspicion in general, but it doesn't sound like they were responsible for the damage: "Both DUP copies of several metadata blocks were written with inconsistent parent and child generations."


> Metadata DUP (not sure if it's across 2 disks or all 3) should be robust, I'd expect?

No. DUP will happily put both copies on the same disk. You would need to use RAID1 (or RAID1c3 for a copy on all disks) if you wanted a guarantee of the metadata being on multiple disks.
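To make the distinction concrete, here's a sketch of how those profiles are chosen at mkfs time (device names are placeholders; flags are standard mkfs.btrfs options):

```shell
# DUP: two copies of metadata, possibly both on the SAME device
# (intended for single-disk filesystems)
mkfs.btrfs -d single -m dup /dev/sda

# RAID1: metadata mirrored across two different devices
mkfs.btrfs -d single -m raid1 /dev/sda /dev/sdb /dev/sdc

# RAID1C3: a metadata copy on three different devices
mkfs.btrfs -d single -m raid1c3 /dev/sda /dev/sdb /dev/sdc
```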


Wow, yuck. (The "Why do we even have that lever?!" line comes to mind.)

...even so, without a disk failure, that probably wasn't the cause of this event.


The DUP profile is meant for use with a single disk. The RAID* profiles are meant for use with multiple disks. Both are necessary to cover the full gamut of BTRFS use cases, but it would probably be good if mkfs.btrfs spat out a big warning if you use DUP on a multi-disk filesystem, as this is /usually/ a mistake.

ZFS has similar configurations possible (e.g. copies).
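For comparison, ZFS's analogue is the per-dataset copies property, which, like DUP, makes extra copies without guaranteeing they land on different disks (pool/dataset name here is hypothetical):

```shell
# Store two copies of every block in this dataset ("ditto blocks").
# Like btrfs DUP, copies=2 may place both copies on the same vdev;
# only mirror/raidz vdevs guarantee device-level redundancy.
zfs set copies=2 tank/data
zfs get copies tank/data
```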

You can end up in this state with btrfs if you start with a single device (which defaults to data=single,metadata=dup) and then add devices without changing the data/metadata profiles. Or you can choose this config explicitly.
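If you've grown a single-device filesystem like that, the profiles can be converted after the fact with a balance; a sketch, assuming the filesystem is mounted at /mnt and /dev/sdb is the newly added disk:

```shell
# Add a second device; existing data/metadata KEEP their old
# single/dup profiles until converted
btrfs device add /dev/sdb /mnt

# Rewrite metadata and data as RAID1 so either survives losing a disk
btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt
```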

I really wish btrfs-progs had a --this-config-is-bad-but-continue-anyway flag, since there are so many bad configurations possible (raid5/raid6, raid0/single/dup). The rescue tools are also bad and are about as likely to make the problem worse as to fix it.


This should be at the top: using metadata DUP on a 3-disk volume is already asking for it, and of course you lose data when you just use it as JBOD with data stored only once. Unless these are enterprise disks with capacitors, anything can happen when it suddenly loses power. Not the FS's fault.

With the same configuration this can happen with ZFS, bcachefs etc just as well.


Will it render the whole filesystem inaccessible and unrepairable on those filesystems as well? One of the issues with btrfs is that it's brittle: a failure tends not to cause an inconsistency in the affected part of the filesystem, but to bring down the whole thing. In general, people are a lot more understanding of a power failure corrupting the files that were actively being written at the time (there are limits to how much consistency can be achieved here anyway), and much less so when the blast radius expands a lot further.

A few decades ago, XFS was notorious because a power failure could wipe out various files, even ones that had been opened only for reading. For instance, I saw many systems bricked because XFS wiped out /etc/fstab after a power failure.

Nevertheless, those XFS problems were fixed many, many years ago, and today it is very robust.

During the last few years, I have seen a great number of power failures on computers without a UPS, where XFS was being used intensively at the moment of the power failure. Despite that, in none of those cases was there any filesystem corruption whatsoever; the worst that ever happened was the loss of the last writes performed immediately before the power failure.

This is the behavior expected from any file system that claims to be journaled, even if in the past many journaled file systems failed to keep that promise; e.g. a few decades ago I saw corrupted file systems on all the Linux file systems of the time, and also on NTFS. Back then, only FreeBSD UFS with "soft updates" was completely unaffected by power failures.

However, nowadays I would expect all these file systems to be much more mature and to have fixed any bugs long ago.

BTRFS appears to be the exception, as the stories about corruption events do not seem to diminish in time.


> Unless these are enterprise disks with capacitors, anything can happen when it suddenly loses power. Not the FS's fault.

Most filesystems just get a few files/directories damaged, though. ZFS is famous for handling totally crazy things like broken hardware that damages data in transit. ext4 has no data checksums, but at least fsck will drop things into a lost+found directory.

The "making all data inaccessible" part is pretty unique to btrfs, and lets not pretend nothing can be done about this.


Do you not know how the news works? This is a report about a whitepaper by Google's Quantum AI team. If you want to be mad about something, be mad at Google for releasing it, not at the newspapers reporting the press release.

It is though, even says so in the readme:

> Note: the written report is currently "vibe coded" physical and engineering analysis using various LLM-based AIs, with the author acting as a guide and sanity check and putting pieces together. The intention moving forward is to move calculations to code and simply report the results.


Ah, I do see that was added to the readme about 12 hours after my original comment. It does seem more heavily curated than any one-shot output would be.

From the Readme:

> Note: the written report is currently "vibe coded" physical and engineering analysis using various LLM-based AIs, with the author acting as a guide and sanity check and putting pieces together. The intention moving forward is to move calculations to code and simply report the results.


Oh please, keep your sorry, defeatist excuses to yourself. Every vote counts. Becoming politically active is always an option. And there are literally "No Kings" protests all over the US this weekend, even in flyover states!


1. Every vote counts but in my jurisdiction the people I would like to win generally do. I’m not _changing_ anything with my vote.

2. It’s not clear to me that these protests are effective. Do you have evidence that they are?


As someone handling dozens of OpenBSD servers and VMs at work, I don't care about copyright and licenses anymore.

It's 2026, just shut up and give us at least one modern filesystem already!


I'd love a "Windows Classic" Release!

