
Thanks for the question! Overcommitment is part of the plausible deniability design: whatever secrecy level your password unlocks, the disk must look fully utilized, because visibly unmapped space would be a clear hint that something is still hidden.

Currently, we achieve this by overcommitting the volume size: if you have, say, a 1 GB device and you format it as a 3-volume Shufflecake setup, each of these 3 volumes will appear to be 1 GB in size. So what happens if you write more than 1 GB in total? You will start receiving I/O errors. The volumes themselves will not get corrupted, because they will not clobber each other, but the kernel module will basically tell you "sorry, I ran out of space earlier than I expected".
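To make the mechanism concrete, here is a minimal Python sketch of the overcommitment idea: a toy allocator where every volume advertises the full device size but physical slices come from one shared pool. This is an illustration only, not Shufflecake's actual on-disk format or API.

    import errno

    class ToyShufflecake:
        """Toy model: overcommitted volumes sharing one pool of slices.

        Every volume advertises the full device size, but physical
        slices are only claimed from the shared pool on first write.
        """

        def __init__(self, device_slices, n_volumes):
            self.free = set(range(device_slices))       # unclaimed slices
            self.maps = [{} for _ in range(n_volumes)]  # logical -> physical

        def write(self, vol, logical):
            vmap = self.maps[vol]
            if logical in vmap:
                return vmap[logical]    # already backed, just overwrite in place
            if not self.free:
                # Pool exhausted even though each volume still "looks" like
                # it has free space: this surfaces as an I/O error.
                raise OSError(errno.ENOSPC, "ran out of space earlier than expected")
            vmap[logical] = self.free.pop()
            return vmap[logical]

    # A 10-slice "1 GB device" formatted with 3 volumes, each appearing 10 slices big:
    disk = ToyShufflecake(device_slices=10, n_volumes=3)
    for i in range(4):
        disk.write(0, i)                # 4 slices in volume 0
        disk.write(1, i)                # 4 slices in volume 1
    disk.write(2, 0)                    # 9th slice
    disk.write(2, 1)                    # 10th slice: pool is now empty
    try:
        disk.write(2, 2)                # an 11th distinct slice fails
    except OSError as e:
        print(e)                        # [Errno 28] ran out of space earlier than expected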

A planned feature, to manage this more elegantly, is "virtual quotas". With this system, low-secrecy volumes will have a limited size (chosen by the user), but the "topmost" unlocked volume (the "most secret" one that you have unlocked with a given password) will not have this limit: it will always appear to take all the leftover space. See Section 6.5, "Use of Disk Metadata", in the research paper.
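For illustration, here is a minimal sketch of how free-space reporting could look under virtual quotas. The feature is only planned, so the function below is hypothetical, loosely following the idea in Section 6.5 of the paper.

    def apparent_free_space(device_size, quotas, used, top):
        """Hypothetical "virtual quotas" reporting (toy numbers in GB).

        quotas[i] - user-chosen cap for volume i (lower index = less secret)
        used[i]   - space currently written to volume i
        top       - index of the topmost (most secret) unlocked volume
        """
        free = {i: quotas[i] - used[i] for i in range(top)}
        # The topmost unlocked volume claims all leftover device space,
        # so nothing hints at even deeper volumes.
        free[top] = device_size - sum(used[: top + 1])
        return free

    # 100 GB device, volumes 0 and 1 capped at 10 and 20 GB:
    print(apparent_free_space(100, quotas=[10, 20], used=[4, 7, 12], top=2))
    # {0: 6, 1: 13, 2: 77}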




Can you describe what you mean by plausible deniability if, in your example, you can prove from any of the individual Shufflecake volumes that there's X amount of data in the others?

Take a 3-volume Shufflecake setup on a 100 GB device. If I put 10 GB into #1 and 20 GB into #2, then in #3 I can only store 70 GB of data before I get I/O errors, which leaks that there's 30 GB of data in the other volumes.


Yes, but only if you have all three open, i.e., you are in a "home alone" scenario. Remember that Shufflecake volumes have a hierarchy: volume 1 is "less secret" than volume 2, which is "less secret" than volume 3, etc. In your example, during an interrogation, you would only open volume 1 and maybe volume 2, but not volume 3. You would see that volume 1 has 10 GB of data, volume 2 has 20 GB, and you can still write 70 GB before getting I/O errors. Nothing hints at the fact that there is a 3rd volume. Of course, in doing so, you will actually overwrite and corrupt volume 3, but this is desired behavior. That's why we recommend always opening all volumes in the "home alone" scenario.
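A back-of-the-envelope check of why the decoy view reveals nothing (numbers from the example above; plain arithmetic, not actual Shufflecake output):

    # With only volumes 1 and 2 unlocked, the kernel module cannot tell
    # volume 3's slices apart from free space, so the apparent capacity is:
    device_gb = 100
    decoy_used_gb = [10, 20]                 # volumes 1 and 2
    writable_gb = device_gb - sum(decoy_used_gb)
    print(writable_gb)                       # 70 -- exactly as if volume 3 did not exist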


Thank you for that clarification.

Another question about this: presume #1 has 10 GB, #2 has 20 GB, #3 has 35 GB.

We have #1 and #2 open, and #3 is taking 50% of the 'free space' shown. Does writing data in #1 or #2 have a roughly 50% chance of destroying data in #3, or does it know the mapped blocks, so that the overwrite only happens once the actual free space is used up?


Roughly a 50% chance of messing up data in #3.

There is the possibility of reducing this chance a bit, at the cost of wasting more space, by using error correction, e.g. with RAID, if so desired. This is explained in the paper, in the FAQ, and in the README.
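To see where the ~50% comes from, here is a toy Monte Carlo, under the simplifying assumption (mine, not the paper's) that each new write grabs a uniformly random apparently-free slice:

    import random

    # 70 of 100 GB look free with #1 and #2 unlocked, but 35 GB of that
    # secretly belong to #3, so a random landing spot hits #3 half the time.
    def p_write_hits_hidden(trials=100_000, apparent_free=70, hidden=35):
        hits = sum(random.randrange(apparent_free) < hidden for _ in range(trials))
        return hits / trials

    print(p_write_hits_hidden())             # ~0.5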


@chungy, replying to your question below:

Oh, I see what you mean. In your scenario, you can still write data, because creation and deletion of files is managed by the filesystem on the volume, not by Shufflecake. So the slices (disk space) that you previously allocated for the volumes will still be there, but they contain empty space that the filesystem recognizes as such and can therefore overwrite.

What might be improved, instead, is that once a slice is empty because you have deleted all its files, that slice still exists and is still assigned to the volume which first allocated it. It would be good to have a system that returns such a slice to the list of "unallocated" slices, so that it can be claimed in the future by other volumes. We have some ideas on how to implement this (using TRIM, see Section 6.6 of the research paper) but, frankly, I personally think it is not crucial, because it only matters if you exploit the overcommitment feature heavily (which we'd rather limit through the use of virtual quotas, as explained above).
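As a sketch of that reclamation idea (again a toy model; the TRIM-based design in Section 6.6 is an idea, not an implemented feature):

    class ToySliceReclaim:
        """Toy model: slices return to the shared pool on TRIM/discard."""

        def __init__(self, device_slices):
            self.free = set(range(device_slices))
            self.owner = {}                  # physical slice -> owning volume

        def write(self, vol):
            physical = self.free.pop()       # KeyError once the pool is empty
            self.owner[physical] = vol
            return physical

        def trim(self, physical):
            # The filesystem reported (via TRIM) that it no longer needs
            # this slice, so hand it back for any volume to claim later.
            del self.owner[physical]
            self.free.add(physical)

    pool = ToySliceReclaim(device_slices=10)
    s = pool.write(vol=0)                    # volume 0 claims a slice
    pool.trim(s)                             # today, deleting files alone would not do this
    assert len(pool.free) == 10              # slice is claimable by other volumes again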


Thanks, but I wasn't so much asking about storing more than you have at once; rather, as file systems are used, old unused space might still be occupying space in the Shufflecake scheme. Say you have that 1 GB device with three equally sized volumes; sure, you can write ~333 MB to each of them, and then delete all the files. Now, according to each of the file systems, you have 0 B used and 1 GB free. When you try making new files after this point, will it still generate I/O errors?



