The method is called `CoherentAllocation::into_parts`. It takes `self` by value, which means the struct is consumed (implied by the `into_` prefix). After calling it, you either reconstruct the allocation or free it yourself.
See `Vec::into_raw_parts` or `Box::into_raw` for a stdlib analogy.
Yeah, no. This is a coup and they are all in. They would not be this blatant about taking control illegally and fast if they expected to leave any institutions to still enforce the law against them.
Looks like the Steam team moved to controlling spawning themselves and calling execvpe.
I would like to see at least in-process environment modification discouraged. Rust is dealing with the write side by making `set_var`/`remove_var` unsafe, since getenv calls coming through C can't be synchronized with them, but getting rid of the read side is much harder than the write side.
It is possible to check for setenv/unsetenv/putenv with nm -D, and a quick sample of my ~/.cargo/bin/* shows far too many programs using those. Yeah, they could be single threaded, but who can guarantee they will remain so? Come to think of it, listing symbols could detect pthread_create as well.
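For reference, the kind of scan I mean (assuming binutils' nm is available; it only sees the dynamic symbol table, so statically linked calls won't show up):

```shell
# List binaries whose dynamic symbol table references the
# environment-mutating libc functions (or pthread_create).
for bin in ~/.cargo/bin/*; do
  if nm -D "$bin" 2>/dev/null | grep -qE ' U (setenv|unsetenv|putenv)$'; then
    echo "$bin mutates the environment"
  fi
done
```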
I'd be interested in a way to do static binary analysis to get from those symbols to a call tree, as well.
I don't see a way to check for **environ usage though, the compiler could turn this one into anything.
I've had my own bad experiences with Btrfs (it doesn't behave well when close to full), and my intuition is that Facebook's use of it is in a limited operational domain. It works well for their use case (uploaded media I think?), combined with the way they manage and provision clusters. Letting random users loose on it uncovers a variety of failure modes and fixes are slow to come.
On the other hand, while I haven't used it for /, dipping my toes in bcachefs with recoverable data has been a pleasant experience. Compression, encryption, checksumming, deduplication, easy filesystem resizing, SSD acceleration, ease of adding devices… it's good to have it all in one place.
> my intuition is that Facebook's use of it is in a limited operational domain
That's not really true: it's deployed across a wide variety of workloads. Not databases, obviously, but reliability concerns have nothing to do with that.
My point isn't "they use it, it must be good": that's silly. My point is that they employ multiple full time engineers dedicated to finding and fixing the bugs in upstream Linux, and because of that, BTRFS is more well tested in practice than anything else out there today.
It doesn't matter how well thought out or "elegant" bcachefs or ZFS are: they don't have a team of full time engineers with access to thousands upon thousands of machines running the filesystem actively fixing bugs. That's what actually matters.
> Compression, encryption, checksumming, deduplication, easy filesystem resizing, SSD acceleration, ease of adding devices... it's good to have it all in one place.
Self healing is dangerous because it can potentially corrupt good data on disk, if RAM or another system component is flaky.
Repro: the supposedly only good copy is read into RAM, RAM corrupts a bit, the CRC is recalculated over the corrupted data, and the corrupted copy is written back to disk(s).
Why would it need to recalculate the CRC? The correct CRC (or other hash) for the data is already stored in the metadata trees; it's how it discovered that the data was corrupted in the first place. If it writes back corrupted data, it will be detected as corrupted again the next time.