Shufflecake: Plausible deniability for hidden filesystems on Linux (2023) (iacr.org)
66 points by simonpure 68 days ago | 23 comments




Thanks! Related:

Shufflecake Research Paper Released - https://news.ycombinator.com/item?id=37822664 - Oct 2023 (1 comment)

Shufflecake: Plausible deniability for multiple hidden filesystems on Linux - https://news.ycombinator.com/item?id=35621451 - April 2023 (1 comment)

Shufflecake: Plausible deniability for multiple hidden filesystems on Linux - https://news.ycombinator.com/item?id=33580957 - Nov 2022 (1 comment)

Shufflecake: Plausible deniability for multiple hidden filesystems on Linux - https://news.ycombinator.com/item?id=33576809 - Nov 2022 (1 comment)

Shufflecake: Plausible Deniability for Multiple Hidden Filesystems - https://news.ycombinator.com/item?id=33545393 - Nov 2022 (3 comments)


Wow, admin of the Shufflecake project here, super excited to see this on HN again! Feel free to AMA. Elia and I are busy right now working on an upcoming major release; we will make an announcement and give more details at the upcoming Swiss Crypto Day event in St. Gallen next Monday: https://swisscryptoday.github.io/2024/

I will also reply to comments below.


The website mentions overcommitting volumes being an effective method of creating plausible deniability. How does Shufflecake manage used data in the volumes to make sure they don't clobber each other? Do you depend on TRIM requests coming from the file system, or is it just a matter of hoping you don't happen to cause corruption?

There is a further FAQ entry about TRIM that seems more related to the underlying encryption than to file systems that live on top.


Thanks for the question! Well, overcommitment is part of the plausible-deniability design: basically, whichever password (secrecy level) you provide, it has to look like all the disk space could be in use, because if there were clearly unmapped space, it would be obvious that something is still hidden.

Currently, we achieve this by overcommitting the volume size: if you have, say, a 1 GB device and you format it with a 3-volume Shufflecake setup, each of these 3 volumes will appear as 1 GB in size. Then the question is, what happens if you write more than 1 GB in total? Well, you will start receiving I/O errors. The volumes themselves will not get corrupted (as long as all of them are unlocked, see below), because they will not clobber each other, but the kernel module will basically tell you "sorry, I ran out of space earlier than I expected".
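
To make this concrete, here is a toy model of the behavior (an illustrative Python sketch, not the actual device-mapper code; all names here are made up):

    # Toy model of overcommitment: every volume advertises the full device
    # size, but physical slices come from one shared pool, so writes fail
    # once the pool is exhausted -- surfaced to userspace as I/O errors.

    DEVICE_SLICES = 1024              # e.g. a 1 GB device in 1 MB slices

    class OutOfSpace(IOError):
        """Stands in for the I/O error the kernel module would return."""

    class SlicePool:
        def __init__(self, total):
            self.free = list(range(total))

        def claim(self):
            if not self.free:
                raise OutOfSpace("ran out of space earlier than expected")
            return self.free.pop()

    class Volume:
        def __init__(self, pool):
            self.pool = pool
            self.mapping = {}         # logical slice -> physical slice

        def write(self, logical_slice):
            if logical_slice not in self.mapping:
                # Physical space is only claimed lazily, on first write,
                # which is what makes the overcommitment possible.
                self.mapping[logical_slice] = self.pool.claim()

    pool = SlicePool(DEVICE_SLICES)
    volumes = [Volume(pool) for _ in range(3)]   # 3 volumes, each shown as "1 GB"
    # Together the volumes can absorb DEVICE_SLICES slices of writes in
    # total, not 3 * DEVICE_SLICES: the extra promised space is never backed.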

A planned future feature, to manage this more elegantly, is "virtual quotas". With this system, low-secrecy volumes will have a limited size (chosen by the user), but the "topmost" unlocked volume (the "most secret" one that you have unlocked with a given password) will not have this limit: it will always appear to take all the leftover space. See Section 6.5, "Use of Disk Metadata", in the research paper.
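
In numbers, the quota scheme would change the advertised sizes roughly like this (a hypothetical sketch, not implemented behavior):

    # Hypothetical virtual-quota variant: decoy volumes get a hard cap
    # chosen by the user; only the topmost unlocked volume advertises
    # all of the leftover space.
    device_gb = 100
    quotas = {"vol1": 10, "vol2": 20}                      # user-chosen caps (GB)
    topmost_visible_gb = device_gb - sum(quotas.values())  # appears as 70 GB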


Can you describe what you mean by plausible deniability if, in your example, you can prove from any of the individual Shufflecake volumes that there's X amount of data in the others?

Take a 3-volume Shufflecake setup on a 100 GB device. If I put 10 GB into #1 and 20 GB into #2, then in #3 I can only store 70 GB of data before I get I/O errors, which leaks that there's 30 GB of data in the other volumes.


Yes, but only if you have all three open, i.e., you are in a "home alone" scenario. Remember that Shufflecake volumes form a hierarchy: volume 1 is "less secret" than volume 2, which is "less secret" than volume 3, etc. In your example, during an interrogation, you would only open volume 1 and maybe volume 2, but not volume 3. The interrogator would see that volume 1 has 10 GB of data, volume 2 has 20 GB, and that you can still write 70 GB before getting I/O errors. Nothing hints at the fact that there is a 3rd volume. Of course, in so doing, you will actually overwrite and corrupt volume 3, but this is desired behavior. That's why we recommend always opening all volumes in the "home alone" scenario.
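
The reason decoy writes can silently eat volume 3 follows from the slice model sketched earlier: a closed volume's slices are indistinguishable from free space. Roughly (an illustrative sketch, not the real allocator):

    import random

    # With volume 3 closed, only the slices of *open* volumes are known to
    # be in use; anything else -- including volume 3's data -- looks free,
    # so a new allocation for volume 1 or 2 may land on it.
    def claim_slice(all_slices, mapped_by_open_volumes):
        apparently_free = [s for s in all_slices
                           if s not in mapped_by_open_volumes]
        return random.choice(apparently_free)   # may hit volume 3's data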


Thank you for that clarification.

Another question about this: presume #1 holds 10 GB, #2 holds 20 GB, and #3 holds 35 GB.

We have #1 and #2 open; #3 is taking 50% of the 'free space' shown. Does writing data in #1 or #2 have a roughly 50% chance of destroying data in #3, or does it know the mapped blocks, so the overwrite only happens once the actual free space is used up?


Roughly a 50% chance of messing up data in #3.

There is the possibility of reducing this chance a bit, at the cost of wasting more space, by using error correction, e.g. with RAID, if so desired. This is explained in the paper, in the FAQ, and in the README.
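
For the numbers in this example, the 50% falls directly out of the naive model (assuming new slices are picked uniformly from the apparently free ones, which is a simplification):

    device_gb, v1, v2, v3 = 100, 10, 20, 35
    apparently_free = device_gb - v1 - v2    # 70 GB: #3 is invisible
    p_hit_v3 = v3 / apparently_free          # 35 / 70 = 0.5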


@chungy, replying to your:

> Thanks, but it wasn't so much asking about storing more than you have at once, but as file systems are used, old unused space might still be occupying space in the Shufflecake scheme. Say you have that 1GB volume with three equally-sized volumes; sure, you can write ~333MB to each of them, and then delete all the files. Now according to each of the file systems, you have 0B used and 1GB free. When you try making new files after this point, will it still generate I/O errors?

Oh, I see what you mean. In your scenario, you can still write data, because the creation and deletion of files is managed by the filesystem on the volume, not by Shufflecake. So, those slices (disk space) that were previously allocated for the volumes will still be there, but they contain empty space that is recognized as such by the filesystem and can therefore be overwritten.

What might be improved, instead, is the fact that once a slice is empty because you have deleted all its files, that slice still exists and is still assigned to the volume that first allocated it. It might be good to have a system that returns such a slice to the list of "unallocated" slices, so that it can be claimed in the future by other volumes. We have some ideas (using TRIM, see Section 6.6 of the research paper) on how to implement that but, frankly, I personally think this is not extremely crucial, because it only matters if you heavily exploit the overcommitment feature (which we'd rather limit through the use of virtual quotas, as explained).
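
One possible shape for such a reclamation hook, in terms of the earlier toy model (purely hypothetical; the paper only outlines the idea):

    # Hypothetical TRIM hook: when the filesystem discards a logical slice,
    # return the backing physical slice to the shared free list so other
    # volumes can claim it later.
    def on_trim(volume, logical_slice, unallocated_slices):
        physical = volume.mapping.pop(logical_slice, None)
        if physical is not None:
            unallocated_slices.append(physical)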


Thanks, but it wasn't so much asking about storing more than you have at once, but as file systems are used, old unused space might still be occupying space in the Shufflecake scheme. Say you have that 1GB volume with three equally-sized volumes; sure, you can write ~333MB to each of them, and then delete all the files. Now according to each of the file systems, you have 0B used and 1GB free. When you try making new files after this point, will it still generate I/O errors?


My personal dream is "more than plausible deniability": being able to mount an encrypted fs with many keys, one showing "the basics", another "some hidden but not important stuff", and another the really important stuff.

This way you have a clean face and a hidden-but-revealable one, so you can state "that's why I use this tool", and no one can claim you also use it for something else. Part of real deniability is being able to explain why you have this code on your system.


Shufflecake allows this; a setup almost exactly like the one you mention is described on their homepage: https://shufflecake.net/

> In Shufflecake, each hidden volume is encrypted with a different secret key, scrambled across the empty space of an underlying existing storage medium, and indistinguishable from random noise when not decrypted. Even if the presence of the Shufflecake software itself cannot be hidden - and hence the presence of secret volumes is suspected - the number of volumes is also hidden. This allows a user to create a hierarchy of plausible deniability, where "most hidden" secret volumes are buried under "less hidden" decoy volumes, whose passwords can be surrendered under pressure.


Yes, 100% this.

And just wait until we release a Shufflecake-powered, fully hidden multi-OS distro ;)


Couple of quick thoughts here.

1. Cool project, this space always felt rather empty after the mysterious ending of TrueCrypt.

2. This was made as part of a Swiss Government tech lab's program and requires you to install kernel drivers written in C. So think carefully about what you are actually doing here and whether that makes sense for you.

3. I'm still much less clear on how it prevents the famous XKCD wrench problem (https://xkcd.com/538/). Any captured system with this on it would make it immediately obvious what is going on, right down to the userland CLI tool, so it isn't going to save you from getting wrenched if that's part of your security model. If anything, you're almost certainly going to get wrenched a LOT more, unless you're able to provide 15 different working passwords, which is the maximum number of "layers" this thing provides.

In fact, that's kind of the crux of the problem here. What initially seems like a great idea, a system where "you can't prove that there are any hidden files", is actually a huge liability, not the ace up your sleeve you might assume it is at first.

Because let's say you make a public, a secret, and a top-secret layer on a device with this thing. You don't seem to be able to prove that there was never an ultra-top-secret layer, so it's going to be just you trying to convince someone, and that someone might be very violent. So if physical interrogation is part of your threat model, I'd probably say to avoid this approach entirely, as it may do more harm than good.


With this reasoning, whether or not you actually have a hidden FS is irrelevant, since it's unprovable either way. So you will be tortured without end anyway, just to make sure there is no hidden FS, because anyone who thinks you are hiding something will assume you have one.


Just to be clear, I was specifically referring to a scenario where a device with these binaries on it is captured.

At that point your day probably becomes a lot worse depending on who finds it and why they want to talk with you about it.


I don't know, a max of 15 layers seems pretty clear to me. They'll ask for 15 passwords and, if those are not furnished, the wrench.


Oh, another thing. Just to be clear, the 15-password maximum is just an implementation choice to make our life easier, but we actually think it is a liability, and we are working to remove this limit completely. We do not think that anyone in their sane mind will ever need 15 levels of secrecy with Shufflecake, but that's not the point. The point is that, with a hardcoded maximum limit (regardless of the value, even if it's 200 or 2000), there is a workaround that allows the user to do something very dangerous. By removing this limit (up to what the disk size allows), we remove the possibility for the user to shoot themselves in the foot. See the "Safeword" section in our research paper.
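
For context, the limit exists because the on-disk header reserves a fixed number of slots. The sketch below is illustrative of the idea only, not the exact on-disk format; the slot size and function name are made up:

    import os

    MAX_VOLUMES = 15                 # current hardcoded limit
    SLOT_SIZE = 4096                 # illustrative, not the real size

    # The device header holds MAX_VOLUMES fixed-size slots.  Unused slots
    # are filled with random bytes, indistinguishable from encrypted volume
    # headers -- but the slot *count* is public and fixed, so an adversary
    # knows they can demand up to 15 passwords.
    def make_header(encrypted_volume_headers):
        slots = list(encrypted_volume_headers)
        while len(slots) < MAX_VOLUMES:
            slots.append(os.urandom(SLOT_SIZE))   # filler looks like ciphertext
        return b"".join(slots)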


Hey, Shufflecake co-author here, thanks for the questions.

1. Indeed!

2. Wow wow wow, hold on there :D there is a misunderstanding: the "Swiss Government" was never involved, not even remotely. First of all, this project has roots that go waaay back in time [1]. It was finally realized when I met Elia, who was then a student in a joint ETHZ/EPFL MSc program, and who accepted the thesis project that I proposed in the Kudelski Security Research team [2]. Kudelski Security (disclaimer: my current employer) is a traditional cybersecurity company, mainly focused on MDR, CISO-as-a-Service, etc., but it's also one of the few that have a dedicated fundamental research team, which I'm part of as a Principal Cryptographer. The goal of our team is not only to "invent new ways for the company to make money", but also to do "good PR" through e.g. scientific/academic publications, talks at conferences, open-source software, etc., which is what I'm personally focused on. As part of this, we offer students the possibility to do their thesis in our team on selected topics, and that's where Shufflecake came from.

That is, originally. As it stands now, Kudelski Security is not involved in Shufflecake anymore: it's basically a pet project of Elia and myself, we pay out of pocket for website hosting, etc., and we work on Shufflecake in our spare time. So, please feel free to contribute, we need help!

3. That is exactly the point! The "killer feature" of Shufflecake is precisely that there is no way to determine whether you have finished giving up all your passwords or there is still something undisclosed. It is true that this also means the adversary "does not know when they can stop torturing you", but we firmly believe, both philosophically and pragmatically, that "plausible deniability" does not make sense if you're not willing to accept this risk.

Let me explain better, because this is a recurring question we get.

First of all, you have to keep the security model in mind. We are not necessarily talking about Snowden-level paranoia here; it might be something more mundane, for example an investigation in a civil court because you're suspected of being in possession of illegal material, whatever "illegal" means. At least in democratic countries, you don't risk being waterboarded for this. We know [3] that even TrueCrypt or similar systems have been enough to get people acquitted in some cases.

Then there are those cases where the adversary doesn't really care, they just want to find something. We recently had a discussion with a large international humanitarian organization; one of their officers told us: "Our agents often cross borders with laptops full of sensitive information. In theory we are protected from searches by UN treaty XYZ, but... go explain that to the angry Afghan guard at the airport!" Shufflecake sidesteps this problem efficiently.

And then there are those cases where you are hiding secrets that you care about more than your own life. Let's be clear: if you are an investigative journalist in Guatemala and the Cartel kidnaps you, you're dead, period. If you're a member of a resistance group of a repressed minority in a dictatorial state and the police apprehend you, you're going to disappear, no matter what. But with Shufflecake you at least have a chance to resist and possibly save the lives of your informants or comrades. With TrueCrypt, you don't really have this option.

Hope that clarifies things, but feel free to ask more. Thanks!

[1] (in Italian) https://e-privacy.winstonsmith.org/e-privacy-X.html#i13

[2] https://research.kudelskisecurity.com/

[3] https://www.theregister.com/2010/06/28/brazil_banker_crypto_...


Thanks for the detailed reply, I really appreciate it.

I only ask that you continue to make that third point clear to people who might be considering using it. Yes, it is not a scenario most of us will likely ever face, but what you have built is absolutely going to attract people who are at risk of torture, and I don't think it's conscionable to put this in their hands unless they are extremely clear on the fact that, with this tool, they literally won't have a way to clear their name in that situation. That might not actually be obvious to them until it's too late, and maybe they would make other choices if they knew it ahead of time. I'd personally see it as a huge liability, but everybody's situation is different, obviously.

But at a minimum, I think you should help them understand where a tool like this fits into the bigger picture and what other steps they should take. Otherwise, people will do really dumb shit with this, because they put all of their faith in it and skipped a lot of other fundamental things that might have helped them. My experience with this topic is that they are absolutely going to get those passwords from you one way or another, and if your plan is to trick them, you're going to have a really shit time.

It may even be worth getting in touch with some folks who have been on the receiving (or giving) end of the wrench scenario, just to chat with them about what you've built here and whether it is something they think would have helped them.


You raise a good point, and we try our best to say that Shufflecake is not a toy, that users must be conscious of the risks, etc. I think what is missing is a proper user manual as a central source of documentation; we'll need to work on that.

But we cannot save the users from themselves. We do our best to make things easy and secure, but at the end of the day plausible deniability is one of those things that are kind of "hardcore".

To be clearer, I can't really see a reasonable scenario where using Shufflecake would put you in trouble but using VeraCrypt would not. I'd be happy to talk with people at the "receiving end of the wrench" (lol), and this is also part of our ongoing outreach campaign, but so far all of the cases I've seen are either "we cannot prove you're a criminal, so we release you" or "even if we're convinced that you gave us everything you have, we will still kill/torture you just because". For me, either you go plausible-deniability "all-in", or you don't bother at all. And of course you're right that one needs to consider many things and adopt all sorts of other precautions on top of that, but still, a solution like Shufflecake is sorely missing right now.

But, yes, you are absolutely right that we must continue to put a big disclaimer in front of users and help them understand the risks of using this.


I'm actually with you on all these points, for the record. And again, this is a very specific scenario that thankfully isn't going to be relevant to most people, but for those people it's, as you said... not a toy.

It's also realistically not on you to give them the training required to operate in that environment, because a tool like this is only going to be a tiny part of it.

In my mind, there is a very specific kind of individual, with maybe a bit more enthusiasm than brains or experience, who will reach for a tool like this if it's positioned to them a certain way, and who is going to get into a very preventable world of trouble unless somebody steps in to remind them to be sensible and think very carefully about what exactly they are doing. Things can sound great from a technical point of view but really mess you up in the real world, and I think this is one of those things in that situation.



