EU citizens’ rights are under threat from anti-encryption proposals (protonmail.com)
698 points by eddieoz on Jan 28, 2021 | 148 comments


The failure (our failure) to make the ideals and ideas of freedom in the digital age accessible and present in politics is a disaster. FOSS ideas, open culture ideas. What the world wide web actually is... It's not even that they disagree, they aren't even aware.

We spent decades fiddling over arguments with one another while events just took their course. Politicians don't understand or care about any of these ideas. The public doesn't either. When "something must be done" about this or that... because digital culture progresses to some point, the deep well of ideas that dominated web culture has zero impact.

Ultimately, the encryption argument is being had between spooks and tech monopolies. Politicians are bystanders. There is no "freedom of speech" equivalent, no idea of the current technological age to guide them. No flag for the public to rally to.

The right to privacy is great, but we can't just keep arguing by analogy. We need modern age thinking to define this for modern circumstances.


I'm cynical, I think it's an impossible battle. The value being left on the table has grown year over year and that doesn't seem likely to change anytime soon. We're rapidly approaching a point where crime happens because you're either too poor for the tech or you wanted it to happen, medicine is waking up to the potential of constant biometric collection/analysis, and people are being showered in free and useful software because for the first time in history they're valuable just for existing. To contrast this, digital privacy makes everything a bit harder while protecting people from parties that just want to sell them things they'll enjoy and state actors who aren't really compelled by much more than the honor system to respect instructions to change their ways. Eventually the mountain of potential is going to simply eclipse the perceived benefits.


We have email with autocrypt to rally to.

Abandon the smart phone crap and anything with a large organization attached to it.


What you propose is just burying your head in the sand and waiting for the storm. This isn't about "large organizations"; this is about the tendency of governments to sacrifice freedom for greed, control, and power. It doesn't matter if the government is a progressive European government or an authoritarian Chinese government. They all want more transparency into what citizens are up to "for their own good". That's what we need to fight.


> We have email with autocrypt to rally to.

For those unfamiliar:

https://autocrypt.org/

There are plans to add support for it to Tutanota[0], although ironically that blog post was written over a year ago and their FAQ still says that they don't even support interoperable PGP.[1]

When ProtonMail were asked two years ago about supporting Autocrypt, their response was quite negative but also vague[2], and there doesn't seem to have been much progress since.[3]

[0] https://tutanota.com/blog/posts/email-encryption/

[1] https://tutanota.com/faq/#pgp

[2] https://old.reddit.com/r/ProtonMail/comments/8dqk5n/protonma...

[3] https://github.com/ProtonMail/WebClient/issues/120
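For anyone wondering what Autocrypt actually looks like on the wire: it is essentially a header carrying the sender's OpenPGP public key, which receiving clients collect so they can encrypt opportunistically. Here is a minimal sketch in Python (the addresses and key bytes are placeholders, not a real key; real clients also handle header folding, key generation, and peer-state tracking):

  import base64
  from email.message import EmailMessage

  def autocrypt_header(addr: str, openpgp_pubkey: bytes, prefer_mutual: bool = True) -> str:
      """Build an Autocrypt header value: addr first, optional prefer-encrypt, keydata last."""
      attrs = [f"addr={addr}"]
      if prefer_mutual:
          attrs.append("prefer-encrypt=mutual")
      attrs.append("keydata=" + base64.b64encode(openpgp_pubkey).decode("ascii"))
      return "; ".join(attrs)

  # Hypothetical outgoing mail; the key bytes stand in for a real OpenPGP public key.
  msg = EmailMessage()
  msg["From"] = "alice@example.org"
  msg["To"] = "bob@example.org"
  msg["Subject"] = "hello"
  msg["Autocrypt"] = autocrypt_header("alice@example.org", b"\x99\x01-placeholder-key-bytes")
  msg.set_content("Peers learn my key from the Autocrypt header on every message I send.")

Once both sides have seen each other's headers, the clients can switch to encrypting by default; that opportunistic, in-band key exchange is the main difference from classic PGP, where keys are exchanged and verified manually.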


Baking automatic crypto into email is a lost cause, since email is not as straightforward as, let's say, Signal, which only succeeds because it exists in a monoculture (iOS/Android). Email operates across hundreds of different clients (and operating systems), and you get people replying-to-all by mistake and fat-fingering sensitive data to random recipients. That's possible in Signal too, but not nearly as bad: e-mail can exist in any hostile environment it wants, unlike Signal, whose users tend to be more careful about what they send.


> e-mail can exist in any hostile environment it wants, unlike Signal which has a user which is more careful

Could you explain why Signal users are more careful than email users? Aren't all Signal users also email users?

I suppose the reverse isn't true, and there are machines that send people transactional emails (e.g. receipts for online purchases) which it would be nice to secure with PGP.

The real problem with securing email, from my perspective, is the difficulty of creating a UX which accurately and intuitively conveys to the user whether the message they are sending is secure (and what "secure" means). By using a separate app which never sends plaintext, that's basically a non-problem.


> Could you explain why Signal users are more careful than email users?

Sorry, I forgot to mention that phones are typically seen as more secure, and phones are the go-to devices that people use now, and are (usually) permanently switched on, so they have to be secure since they are constantly exposed to the public Internet. (Yes, Windows can be seen as secure too, but IMHO phones are more secure. Windows has been getting better over the years and has mitigated and patched a lot of the common vulns you do see.)

> Aren't all Signal users also email users?

No. Email is often reached from many different OSes and environments. It is common and expected to see people logging into their Gmail from potentially compromised systems at work, or at Internet cafes. They just assume that whenever they log in, they are 'secure', when in some cases the Internet cafe is logging everything or their employer has set up 'monitoring' software to ensure they are actually working and not dossing.

Signal: not so much. They have a single, comparatively secure device that they use to communicate, and since Signal is tied to a phone number, migrating your old Signal 'account' to a new number is far from seamless.


> phones are typically seen as more secure

Like most popular ideas in technology, this is absolutely ridiculous, and I actually laughed out loud when I read it. Most consumer phones (especially outside the US) ship with malware installed. Often worse than consumer PCs, especially since on a PC even someone with little skill can install a completely free OS.

Also: almost every popular OS has full-disk encryption (FDE) as an option (often the default one), which was the main feature (other than sandboxing, which browsers do well enough) that supposedly made phones secure.


This is kind of what I mean. This is not a flag to rally behind. Maybe for nerds, not for our society. Even then it isn't a flag. It's a refuge.

This is what I mean by "failure" to get into political consciousness and politicians' heads... If you want to be free you need the freedom of others. Hiding in technical bastions is like praying in secret. Freedom of religion it isn't, even if it's better than nothing.


> Abandon the smart phone crap

https://puri.sm/products/librem-5


More,

Repairable smartphones, with various choices of operating systems, from brands which don't seem to exhibit hypocrisy when it comes to their values -

https://www.fairphone.com/en/

https://www.pine64.org/pinephone/

https://www.shiftphones.com/en/

https://myteracube.com/

But when the parent said,

> Abandon the smart phone crap

It may not just be about open hardware/OS/apps for smartphones but about the hold the carrier has on the customer, and the privacy issues therein. This is a hard problem to solve because cellular services are oligopolies or even monopolies, amplified by the lack of open modem/radio hardware.

So ditching smartphones for a portable computer might be even a good option[1].

[1]https://hackernews.hn/item?id=25917178


> from brands which doesn't seem to exhibit hypocrisy when it comes their values

This is a strong accusation. Which of those brands are recommended by the FSF [0]? Which of them are actually trying to change the industry with their investments (promoting freedom/privacy) [1]? Which of them are fighting against planned obsolescence by providing lifetime updates [2]? Given all that I am not sure who is a hypocrite.

> So ditching smartphones for a portable computer might be even a good option

Librem 5 is a portable computer [3].

[0] https://www.fsf.org/givingguide/v11/

[1] https://puri.sm/posts/breaking-ground/

[2] https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque...

[3] https://puri.sm/posts/mobile-desktop-convergence/


My comment wasn't targeted at Purism; I hoped the break after 'More,' would have helped make that point, but I should have used better grammar.

Apart from that I stand by what I said.


Then whom did you accuse of hypocrisy and why?


Not OP, but common sense is telling me the answer you seek is "other companies not in the list by OP"


I have a pinephone. Once the modem manager stuff gets upstreamed (so MMS works) and the distros make it easier to update the modem firmware (or they start shipping with updated firmware from the factory), these will probably be in the "just turn it on and use it" state.

There's been an insane degree of improvement in the software over the past 6 months.


I am hearing about Autocrypt for the first time. Mind filling me in on how it is different from PGP encryption?


I always say that if politicians want to stop encryption from being absolute (by making it insecure): you first. Let's see how well it works by implementing encraption in your systems and email, and seeing if your backdoor deals and illegal activity wind up in the public eye for all to see.


Indeed! Meanwhile our government (the Netherlands) tells their employees to use Signal. Good luck telling your forces in the Middle East: "Please use this app that has breakable encryption, but it's not breakable by the bad guys, at least, we are 98% certain they can't break it. Also, we're pretty sure the master key never leaked. Yeah we know how that went for the New York subway but we are better. It's fine, don't whine, even if they break the encryption, they don't understand your Dutch conversations with your loved ones anyway. I mean, what have we got to hide anyway."


To be honest, I much prefer the French government's way, where they are transitioning to Matrix and running their own server. That way they aren't beholden to any organization other than themselves.

In general I think it would be great if the EU made it a directive* to national governments to prefer open source.

Imagine, a world where the entire EU runs on an EU-customized version of Ubuntu/Fedora, office work being done in Libreoffice, messaging done in Matrix. The support contracts would run in the hundreds of millions and be a huge boost to the improvement of said software. Not to mention some internal IT teams would probably be contributing patches for their specific use cases and bugs.

* I am aware the EU has relatively little power to enforce such a thing.


I would highly prefer that as well; in fact it would make me very happy! It gives a lot of credence to a project if a government starts to use it. It would be good if they performed transparent security audits as well. Like the Dutch, the French have also rejected backdoors [0] in the past, but like the Dutch, the French also sometimes say dangerous things [1].

[0] https://www.infosecurity-magazine.com/news/french-government...

[1] https://www.theregister.com/2017/02/28/german_french_ministe...


Encryption used to be classed as a military munition, and as such was illegal to export.

It will probably revert to that status if this kind of law is put into effect, usable by governments/the military with impunity but disallowed for civilians.

https://en.wikipedia.org/wiki/Export_of_cryptography_from_th...


Here in the US, I think we seriously need to capitalize on the 2A crowd. Encryption is a purely defensive weapon, like a shield. It can't actively damage or harm anyone else, but it is a critical means of protection in the modern world. We need the 2A people to understand that it both protects the privacy and security of our digital homes and lives, as well as serving as a check on the power of abusive governments, which are both of the purposes that traditional weapons under the 2A protect.


I don't get the New York subway reference. Could you give more details/link to an article?


They are probably thinking of the MBTA CharlieCard[0][1], which was cracked by MIT students. The MBTA sued them to try to keep them from presenting their research at DEFCON.

[0] https://en.wikipedia.org/wiki/CharlieCard#Security_concerns

[1] https://archive.boston.com/business/articles/2008/03/06/t_ca...


I remember reading a story here about (New York's) subway system and a copied master key that got around. Can't find it here, but I found this: [0]

[0] https://www.nj.com/news/2010/04/master_keys_to_nyc_subway_je...


That does actually make more sense, given the OP’s comment about master keys leaking. I remembered the CharlieCard thing because it was a pretty big deal at the time, and I couldn’t find anything relevant about MTA when I searched, so assumed it was a misremembering since this all happened over a decade ago. Thanks for the link! (I wonder what the outcome was of New York’s audit…)


When I first visited the Netherlands in 2017, I observed that public transportation was transitioning to card-only payments.

That was my first impression of the country in a privacy context: not being able to use cash is a big attack on privacy.


You can still get an anonymous card [1], along with paper cards from a machine, and printed e-tickets. So I would not say that travelling anonymously is becoming impossible, but rather that the current method is a lot more convenient for most, and thus the most known one.

[1] https://www.ov-chipkaart.nl/purchase-an-ov-chipkaart/anonymo...


The card is anonymous, but loading it with money is not, at least not in practice - most machines I've encountered only accept Maestro/VPay (which is an improvement over PIN, as they are marginally available in other countries, but they are also tied to your identity), so you are left with the service points at major train stations at best. Exceptions were more common in the Strippenkaart era.

At least that was the situation by late 2019/early 2020, as I have not been able to travel to Randstad since then for rather obvious reasons.


Thanks for pointing this out. I didn't know about that. My experience was like this: one day I could buy a transport card with cash, and the next day the driver informed me that I could not buy tickets with cash anymore.


Yeah, there is that wish to move away from cash, but we also had this: [0, from 2016]. I hope we will have more privacy-oriented fintech companies soon; indeed it may swing the wrong way. It depends a bit on the ruling parties (we have many and they have to form a coalition.)

[0] https://www.theregister.com/2016/01/04/dutch_government_says...


> privacy oriented fintech

Can you elaborate on that term? We are talking here about digital payments, right? (where usually more than three parties are involved in the process) How are you gonna make sure the whole transaction is not traceable back to you?


> privacy oriented fintech

this phrase is an oxymoron. Banks need to monitor, track, and report activities. Central banks 'siphon' all transactions every day/week/month, so they can cross-reference that Henry Bemis made $100m in transactions today even if he used 50 different banks. A single bank can only track/monitor its own clients for AML; central banks can do it across banks. Tax authorities can tap into banks' data.

Side note: There is no such thing as privacy when you use a card. The only thing that 'protects' you is that your bank won't sell your data to Facebook. But the NatWest banking app DOES talk to FB when you fire it up... so... at least they don't tell FB (yet) everything you do.


Privacy != anonymity; usually.

Specific institutions, like finance, need to be able to discern your identity in some way in order to remain viable economic institutions. Provenance of monies can be a big question, and an important one, when dealing with people who spend their time in financial crime. If financial institutions could use a technology similar to PGP that maintains an identity without needing to reveal who you are, to me that is privacy. There are obvious exceptions to that, but generally I think it's a good idea, given that one of the major ways merchandisers collect data on you is via transactions.
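A minimal sketch of that idea, assuming Python with the widely used 'cryptography' package (the transaction record is made up): a stable key pair serves as the persistent identity, so a counterparty can verify that the same entity signed two transactions without ever learning a legal name.

  from cryptography.hazmat.primitives import serialization
  from cryptography.hazmat.primitives.asymmetric.ed25519 import (
      Ed25519PrivateKey,
      Ed25519PublicKey,
  )

  # The key pair *is* the pseudonymous identity; nothing in it reveals a legal name.
  identity = Ed25519PrivateKey.generate()
  public_identity = identity.public_key().public_bytes(
      encoding=serialization.Encoding.Raw,
      format=serialization.PublicFormat.Raw,
  )

  # The holder signs a transaction record; anyone holding public_identity can
  # confirm it came from the same party as earlier records, and nothing more.
  record = b"2021-01-28 transfer 100 EUR to merchant-42"  # illustrative data
  signature = identity.sign(record)

  # Raises InvalidSignature if the record was tampered with or signed by someone else.
  Ed25519PublicKey.from_public_bytes(public_identity).verify(signature, record)

Whether a regulator would accept pseudonymous-but-consistent identities for AML purposes is a separate, open question; the sketch only shows that identity continuity and name disclosure are technically separable.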


Banks (and similar orgs) have a term for this. It is called "KYC": Know Your Customer. That includes all your data (what they already have and what you provide: address, tax/payroll records if you apply for a loan and they ask for source of income, etc.). When you talk to a "relationship manager", be certain that (if they are any good at their job) what you tell them is recorded and stays in your file.

This also has a positive side. E.g. I never shop for shoes online. So my bank called me one fine afternoon because someone had used my card number to buy shoes. That store only sells women's shoes (I am a man). This was not consistent with my "profile" (of course I have one), so they cancelled the transactions, refunded the money, notified VISA and the merchant, and called me to say that I would see the transactions in my logs (-50, -100, -150 and then +50, +100, +150).

I expect and demand that from my bank, but not from Facebook: "hey why aren't you in your typical pizza resto and you went across the street?"


This is an interesting use case, but it has nothing to do with invasive KYC (as done by banks); the same anomaly detection and follow-up could be done if you were some random identifier "Joe" with any contact detail (email, phone number, Telegram handle, ...)


Yeah, I agree.


Exactly, I cannot agree more.


It will always be traceable, and you can see that many fintech companies that make things easier (like bunq and N26) also attract a lot of attention from bad guys and at the same time seem to freeze quite a few accounts based on suspicious activity (I see that on the forums; I think there are a lot of false positives as well). A lot of "WhatsApp fraud money" seems to move through these companies, and no wonder, because with some you can get several cards activated immediately, funnel money through hopeless people's accounts into bitcoin exchanges or cash, and it's gone.

Anyway, we can only hope they won't outlaw cryptocurrencies to have a glimmer of hope for anonymous payment in the future.


Oh, speaking of cryptocurrencies, I recently came across this article.

https://www.metzdowd.com/pipermail/cryptography/2020-Decembe...


Well, Zcoin, Monero and Dash being delisted from exchanges has got to mean that they are at least somewhat effective...?


The ironic part here is that the resolution pretty much suggests in points 1 & 2 that their comms will continue to be encrypted by mandate.

Like everything else, everything is subtext so you could argue that it doesn't mean this (or anything).


It's also worth mentioning: perspective.

I can't find the link, but I read an article where a tech CEO was called to Washington to work with the government for something or other.

The one thing he noticed that stuck in my mind is

In the tech sector it's about:

  what you CAN do

yet to politicians, the focus is on:

  what you CAN NOT do

So maybe part of the solution is to point out the difference.


I think the Solarwinds breach pretty much gives us the evidence we need on this score.


The only thing I don't like about the word "encraption" is that it's way too easy to misparse as "encryption" lol


"encraption" : chuckle


"Put simply, the resolution is no different from the previous proposals which generated a wide backlash from privacy-conscious companies, civil society members, experts, and MEPs. The difference this time is that the Council has taken a more subtle approach and avoided explicitly using words like 'ban' or 'backdoor.' But make no mistake, this is the intention. It's important that steps are taken now to prevent these proposals going too far and keep Europeans' rights to privacy intact."


There have been all too many efforts from the EU to ban or at least limit encryption. They all get shot down for perfectly valid reasons, only to come back after a few months in one form or another.

This implies that:

1) there is a clear agenda among the political class to go against the will of the citizens despite the strong opposition to the idea

2) it will eventually succeed because they will hide the legislation in some obscure act of law anyway

An example of how shady the EU Council can be:

"Acta was slipped through the European Council in an agriculture and fisheries meeting in December"

https://www.wired.co.uk/article/acta-101


I do not (entirely) disagree with your comment, but I think it would be fair to add some counterweight to the evidence you provide for your point (2): Acta was ultimately rejected by the European Parliament.


I never said ACTA was passed. I just provided an example of the tactics employed to sneak in laws that the general public strongly opposes.


Agreed. But your original point would be stronger if you could provide actual examples of passed EU legislation that was "snuck in".


Governments' attacks on their own citizens are getting to a ridiculous level; this ban on encryption is obviously disliked by 99% of the population and will affect them adversely.

Governments are showing that they don't care about the well-being of their citizens by continuously pursuing these types of proposals.

I think people in society should drop the civil contract altogether. Government has obviously devolved into a tribal state where they do everything in their might to protect a select few, hence people will need to drop their contract with the current one and build a new one.


I'd take this a step further: Let's stop pretending there's some sort of "contract" in the first place.

If you weren't told the terms, if you weren't given the option to negotiate, if you weren't even informed that you were party to the contract until after you were ostensibly bound to it, it isn't a contract.

It's a diktat. You are subject to it, not party to it.

Consent matters. Always and everywhere, consent matters. It doesn't just matter in the abstract. It matters in each particular. Consenting to chip in to build a road shouldn't be assumed to be consent to chip in to start a war.

There is no "social contract." There's just subjection of people to power.


You're going to find it difficult to engage with the other side in a debate if you have absolutely no idea why they disagree with you. Or worse, completely misunderstand their reasoning.

The purpose of these intrusions on E2E encryption is to fight crime and terrorism. It is misconceived and counterproductive, yes, but these people really don't intend to attack citizens in general. They're just woefully ill informed. The way to fight this is to inform them.


I miss the time in my life where I could still believe this.

The overwhelming majority of people in prison, people who have had existing laws enforced against them, aren't violent, even though preventing violence is what the laws they "broke" claimed to be about when they were passed too.

"Crime" is in the eye of the powerful. "Stopping criminals" sounds fine until you realize who they consider "criminals" and how capricious that is.


It is worse than an attack on their own citizens - it is an attack on their own citizens best described as "nationally suicidal" - their finance systems depend upon end-to-end encryption, and they depend upon those systems for the funding which keeps them afloat. These are complete fucking morons who would be the dog that not only caught the car but caught a nuclear waste truck.


That this kind of thing is on the table shows how short our memories are. After the Snowden revelations (among others), now governments expect us to believe they have pure intentions when it comes to breaking encryption?


Those stories land on the table when we've already lost. We have the entire state apparatus weighing all its might in favor of one side, and we're trying to collect citizens' opinions to show that what the EU does isn't derived from power voluntarily granted by the peons. Sorry, by the « voting citizen ».


I live in the Netherlands. A national election is coming in March. What would be the best way to do something about this as a voter?


Generally, I believe Dutch representatives in the EU have been doing their thing when it comes to arguing against these anti-encryption proposals, so it probably needn't play a major role in your deliberations.

It might be that a 2021 edition of this website is going to be launched again though, if you want to make sure: https://www.privacystemwijzer.nl/ (And otherwise the 2019 edition might already give you enough of an idea.)


Write to each of the political parties to express your position. At least to the parliamentary parties that you would consider voting for.

The most powerful thing you can do, right now, is make the parties aware that this is important to your vote.

Try to reach a person - contacting individual politicians is best. Although, they should have a staff to deal with your writing and reply directly to your point, especially before the election. You might expect to be added to a spammy mailing list.

Bearing in mind that the national MPs do not have direct power over this within the EU, they should still have a political position.

Vote for a party which supports your position (check also: https://www.kieskompas.nl/nl/tk21/)


Find out which party makes sense to you. In the UK, there are usually "tests" where you answer questions based on what you think, and it'll show you to what percent you agree with which party.

If I'd have to guess, the Pirate Party probably supports encryption.


Given that the current political climate is "lie until you get elected", it doesn't matter. Maybe vote for someone you like personally. Hopefully they have lied about things you can live with.


How do you determine if it is lying, or just inability to achieve your goals without compromise?

It's easy to state what you want, but to achieve it when you have to collaborate with many others that have other priorities and opinions usually results in a compromise. That's not lying, that's just reality.


If every time there is compromise the opposite of what is promised is achieved (or at the very least nothing at all), then continuing to make those promises is lying.


I highly recommend reading the actual resolution at [0]. It is short and clearly written, a pdf of a few pages.

As Protonmail admits, "it's not explicitly stated in the resolution" that it "seek[s] to allow law enforcement access to encrypted platforms via backdoors". Protonmail argues, however, that this "is widely understood" to be its aim.

I am disappointed that Protonmail provides no evidence that this is really the underlying purpose of the resolution. (They do point out that previous proposals did contain such wording, but this in my view is insufficient - the explicit removal of such words can be called progress, after all!)

So what am I missing? Can someone here maybe provide evidence that this is really the actual intent of the resolution?

[0] https://www.consilium.europa.eu/en/press/press-releases/2020...


>> disappointed that Protonmail provides no evidence that this is really the underlying purpose of the proposal

Be a little generous. This is a blog post, and this is the middle of a debate that's been happening for years. That backdoors are actually what this resolves to is not a controversial position, except that advocates are trying to avoid these terms. In any case, it has been discussed heavily. Not every blog post needs to go through the motions.

Meanwhile, for evidence you can read the resolution yourself and make up your mind... It is linked in the first paragraph. It's short, but here's a paraphrase:

[1] We love encryption, human rights and all that kind of stuff.

[2] Lots of people use encryption. For really important stuff, we even make them use encryption.

[3] BUT(!) Criminals also use encryption. This makes it hard for police to read their DMs even though they're just trying to do their job...

[4] We want to keep encryption, but make it so police can read criminals' DMs.

[5] Big Tech needs to do this.

[6] Regulation something something

[7] More words, no discernible meaning.

Point 4 is the disingenuous point the post is referring to. You are disappointed in a blog post that doesn't cover everything. How about being disappointed in the Council of The European Union's formal resolution demanding something that they know to be impossible. It's not even just technically impossible. It is logically impossible.

This is not the first version, as the blog post mentions. Earlier versions were unpopular because banning encryption seems like a bad idea. Even politicians seem to have noticed that. This version is demanding a ban on encryption, but also demanding that whatever weaker alternative is implemented still be called encryption so people aren't worried.

Honestly, this is hideous. Whatever the merit of their arguments, this is not the way. Everything here is subtext. Nothing is stated plainly. About 40% of the text is "reassuring you that this isn't exactly what it sounds like." Quite literally Orwellian.

Like the top comment says. Let's start with them. All members who have signed this resolution can beta test whatever "encryption that only police can break" means. We can all look forward to reading their emails.


I see that you are claiming that "evidence" is in the resolution itself - something not even Protonmail does! As is obvious from my comment, I did read the resolution and agree with Protonmail that it does not provide sufficient evidence.

More substantially, we clearly disagree about the interpretation of point [4]. I would in particular question whether it calls for a ban on end-to-end encryption - at the very least you have to accept that it does not do so explicitly! And one could, for example, interpret it as a call to force suspected criminals to give up their passwords or some such.

I would therefore also propose that your claim of "Orwellian" is only fully justified once we find an actual proposal in front of the EU parliament that bans encryption. At the moment we are rather far from this, and - without further evidence - I correspondingly think that your claim is rather far-fetched.


Where did I "claim" anything? You demanded evidence from a blog post commenting on a wide discussion that they have been involved in for years.

There are plenty of arguments by plenty of people making this point. You can agree or disagree with that, but you can't demand that they convince you.

If I say that a free press is important to democracy, you cannot demand that I "prove it."

OTOH, if a (my) legislating body has officially resolved something, as they have, I do get to demand that they tell me what it is. Telling me that they expect big tech to implement a system where police can gain lawful access to my data without circumventing encryption is... pissing on me while telling me it's raining.

And yes, Orwellian, in the simplest and most iconic way. This is a resolution to ban encryption. Plain and simple. The resolution's language is all about how they respect privacy and encryption. This is Ministry of Love language, and it's Orwell all the way down. The only way to progress as they clearly wish is by changing the definition of the term "encrypted." More precisely, taking away the "end-2-end" caveat or some other way of encrypting while circumventing encryption.

Incidentally, instead of "police," the resolution refers to them (5 times) as "competent authorities." This is like an Orwell homage... maybe an intentional wink. If I ever write a 1984 knock-off, the police will be called CAs.


> This is a resolution to ban encryption. Plain and simple.

Yes I do know that this is your viewpoint - thank you for repeating it. But, as I also repeatedly said, I have a hard time seeing it because, well, the resolution does in fact not explicitly ban (E2E) encryption.

So can you please help convince me and fellow ignorami by providing a link or two? Is there maybe a statement by a council member? A proposal for the type of backdoor/encryption type the council would like to see implemented?

I think something like that would really move the discussion forward.


The full text of the proposed resolution is here: https://data.consilium.europa.eu/doc/document/ST-13084-2020-...

Relevant part:

"Competent authorities must be able to access data in a lawful and targeted manner,in full respect of fundamental rights and the relevant data protection laws, while upholding cybersecurity. Technical solutions for gaining access to encrypted data must comply with the principles of legality, transparency, necessity and proportionality including protection of personal data by design and by default."

So, what they are saying here is: encryption is allowed but it has to be weak or backdoored. They want "Technical solutions for gaining access to encrypted data". They do want to ban strong encryption (which is the only kind of encryption worthy of the name).


>I have a hard time seeing it because, well, the resolution does in fact not explicitly ban (E2E) encryption.

If you need the actions of the government to be explicit, then you'd have a difficult time proving any case.

Let's turn this back towards you: based on the evidence of previous actions by the governments of the EU (and the EU itself), how can you claim anything prevents this malicious behavior? Nothing explicitly prevents these abuses. This is an intentional design. Does the right to E2E exist in this resolution? It doesn't, so please don't assume they won't use a roundabout way to remove this human right.

Please don't ignore abuses of governmental power; they wield their power with little recourse. Let's not be quick to encourage their poor legislation.


You seem to have upgraded the claim that "banning encryption is implicit in the resolution" to a far broader claim that I can probably summarize as "assume malice". But yet again, all I find to support the latter claim are sweeping statements without evidence.

Can you provide me with examples of EU law that you consider abuses of power and that remove human rights?

(Also, the "EU government" does not exist, so with "governmental power" you maybe mean the power of the Commission?)


> Can you provide me with examples of EU law that you consider abuses of power and that remove human rights?

Their previous attempts to make encryption illegal. I believe encryption is a fundamental human right.

You never answered my question. You seem bogged down in my characterization.

Let me ask again without the characterization.

What EU laws ensure the human right to encryption?


> What EU laws ensure the human right to encryption?

Article 7 of the Charter of fundamental rights of the European Union: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12...

> Article 7

> Respect for private and family life

> Everyone has the right to respect for his or her private and family life, home and communications.

See also ECHR Convention for the Protection of Human Rights and Fundamental Freedoms article 8 https://www.echr.coe.int/Documents/Convention_ENG.pdf

> ARTICLE 8

> Right to respect for private and family life

> 1. Everyone has the right to respect for his private and family life, his home and his correspondence.

> 2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.


I appreciate your citations.

What court cases prove your interpretation of the law? That's the rub.


I can answer your question: unlike the great sibling comment, let me state that I do (or did) not know of such laws.

But now what? Have you proved the point that I need to "assume malice"? Or have you now successfully argued for the narrower claim, that the resolution is indeed a proposed ban on encryption?

I am very sorry but I really do not believe you have - which was, of course, the reason that I ignored your question in the first place.


> Have you proved the point that I need to "assume malice"?

Catch up to the conversation. I moved on from my characterization to repeat my question without the malice. You said 'I can answer your question', yet your entire comment is devoid of such an answer.

You're Sybil'ing this issue. Only obstructing the real issue at hand.

The real issue is whether the laws (via court cases that confirm the interpretation) say it's allowed or banned. Good faith or bad faith in the government is moot. The intention/malice distracts from the actual point, which is why I moved on from it: I'm trying to engage you on the actual issue.

Nothing explicitly protects citizens' right to encryption. No law. No court case. And until that's plainly laid out, we shouldn't be offering our trust to any ruler over us. Whether their intent is positive or negative is entirely moot.

Whether citizens have this human right: that's the issue, and you've spoken to distract from it. I encourage you: prove your case that the EU provides this human right.


> we shouldn't be offering our trust to any ruler over us

But that is exactly the claim I tried to summarize two comments ago as "assume malice"...

> I encourage you, prove your case that EU provides this human right.

This is a basic tenet of law: it is not forbidden, hence you are free to use encryption everywhere in the EU. QED :)

You probably want to argue that in an ideal world, the right to use encryption should be written into the EU treaties. (Another option would be the ECHR, but good luck getting Russia on board!) That might be laudable, but until such is done we seem to have to (unfortunately!) give some bit of trust to the law-makers here and ask whether or not they will actually move and introduce a law that bans encryption.

For example, someone can claim that EU lawmakers want to forbid us from eating cake. As a cake-lover I would be tempted to protest, but should I not maybe first check if the claim is actually reliable? That is what I am trying to do here. If the claim is not reliable, there might be no need to call for an amendment to the treaties to ensure cake-eating remains legal.


>> (me) I encourage you, prove your case that EU provides this human right.

> (you: prof-dr-ir) This is a basic tenet of law: it is not forbidden, hence you are free to use encryption everywhere in the EU. QED :)

Interesting. You went with proof by assumption. There is the rub. If you're going to assume cognitive bias, I can't argue with your irrational belief. Instead, I'm writing for others to realize the error in this mental structure.

> (you) You probably want to argue that in an ideal world,...

Actually, on the contrary, I think you're arguing that in an ideal world those in power don't circumvent the goodwill intent of the law for accidental, negligent, or even nefarious reasons (notice these circumventions of the law happen with a positive motive or a negative one, which is why I moved away from that when you got caught up in characterizations).

Assuming you have a right because the law doesn't explicitly state it is logically equivalent to your gripe about the ban on E2E.

> (you) I have a hard time seeing it because, well, the resolution does in fact not explicitly ban (E2E) encryption.

Said in reflection of your assumption: I have a hard time seeing it because, well, neither the resolution nor any other EU law/resolution/court case explicitly states that we have a right to E2E as citizens.

A side note: since you're relying on assumptions for your human right, I feel you've not learned from history. Laws can and should be explicit (I don't argue against your logic, but I do argue against where your focus is). A region of the world that had Hitler doesn't recognize that this assumption could cause dire consequences? It's an absolutely fair point since you're focusing on the EU. (I'd cite other local atrocious leaders if we were debating a different location.)

A final point: laws should be able to hold the powerful to account. So even if it's stated explicitly in the law, and even if court cases confirm our cultural interpretations of it, that is still not enough. We need one more aspect: consequences for law-breaking. So my conclusion, prof-dr-ir, is that you're a far cry from having all three aspects, QED. I hope others realize this and don't take the situation lightly, especially if we don't want to accidentally, negligently, or even maliciously fall into a dystopian society.


I've been a cypher geek for ages. I used Signal before it was popular and PGP emails even when they were no longer popular, I've been an early enthusiastic supporter of ProtonMail, I was outraged by many of the decisions taken by the US government to break the right to encryption (and, most of all, harm the security of users through backdoors), and I'm alarmed now that the EU seems willing to follow a similar path.

But I also try to walk in politicians' shoes. Security agencies have the right to monitor the traffic linked to activities that pose a threat to public security. And there's no way of saying "open a backdoor only for the bad guys".

Just like a knife can be used both to cut an avocado and murder your wife, E2E encryption can be used both to guarantee freedom of speech in authoritarian regimes or protect intellectual property and PIIs, and to guarantee an un-monitored tool for the communications of terrorist organizations.

So far the position of the tech community has largely been "our job is just to provide the tools, not to predict nor oversee the perils that they pose". And politicians are rightly frustrated with this approach. And frustration at some point inevitably turns into bad legislation, often done without consulting a tech community that has been deaf to their use-case for decades.

Is it possible to build a tool or a technology that guarantees privacy while providing tools for investigators, without opening backdoors and without compromising legitimate use cases?

I know, it's a hard question, and it's not how E2E encryption was designed, but I have the impression that in all these years we haven't even tried to sit around a whiteboard and brainstorm some ideas, and we are simply shouting "but privacy...!" whenever a politician tries to boldly (and often clumsily) break the wall between them and us and implement legislation to regulate our lack of action.


>Is it possible to build a tool or a technology that guarantees privacy while providing tools for investigators, without opening backdoors and without compromising legitimate use cases?

No. The answer is in the question. You can't have privacy if a third party can read your messages. You don't need to be a "cypher geek" to understand this.

>I know, it's a hard question

It's not.


  SSdtIG5vdCBzdXJlIHdoeSB0aGlzIGlzIGV4YWN0bHkgYW4gaXNzdWUuIElmIHlvdSdyZSBnZW
  5lcmF0aW5nIGRhdGEgaW4gZW5nbGlzaCBvciBhIGxhbmd1YWdlIHVuZGVyc3Rvb2QgYnkgb3Ro
  ZXIgaHVtYW5zIChvciB0cmFuc2xhdGFibGUgYnkgb3RoZXIgaHVtYW5zKSB0aGVuIHlvdXIgZG
  F0YSBpcyBhdCByaXNrIG9mIGJlaW5nIGludGVyY2VwdGVkLgoKV2hldGhlciBkaWdpdGFsIG9y
  IGFuYWxvZ3VlLCB0aGUgYmVzdCBpZGVhIGlzIHRvIG1ha2UgYSBzeXN0ZW0gb25seSB5b3UgYW
  5kIHlvdXIgaW50ZW5kZWQgcGFydHkga25vd3MuIEluc3RlYWQgb2Ygc2VuZGluZyBnZW8gbG9j
  YXRpb24gYXMgbnVtYmVycywgZW5jcnlwdCBpdCBmaXJzdCwgdXNpbmcgYSBrZXkgeW91IGNhbi
  BiZSBzdXJlIGlzIGFzIHByaXZhdGUgYXMgcG9zc2libGUsIGFuZCBkbyBub3QgdXNlICJwbGFp
  biB0ZXh0Ii4KCkFueSBkYXRhIHlvdSBnZW5lcmF0ZSwgZXZlbiBvcmFsbHksIHRoYXQgaXNuJ3
  QgImVuY3J5cHRlZCIgeW91cnNlbGYgY2FuIGJlIHVzZWQgYXMgZXZpZGVuY2UgYWdhaW5zdCB5
  b3UuCgoyMDN1dWlvd2VmbmJ3OXJoMzg5NHI1aHk4OTM1aHVyODlvMzQKCl4gdGhhdCBtaWdodC
  BiZSBnaWJiZXJpc2gsIG9yIGFuIGluc3RydWN0aW9uIHRvIHNvbWVvbmUgc29tZXdoZXJlIGlu
  IHRoZSB3b3JsZCB0aGV5IGNhbiBkZWNpcGhlci4KCldlIG11c3QgYXNzdW1lIGFsbCBkYXRhIG
  NhbiBiZSBpbnRlcmNlcHRlZCwgaXMgYmVpbmcgaW50ZXJjZXB0ZWQgYW5kIHRvIG1ha2Ugc3lzdGVtcyBhZ2FpbnN0IHRoaXMuIA==
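(For anyone curious, the block above is plain Base64; a couple of lines of Python decode it, e.g. reading the text on stdin:)

  import base64
  import sys

  # b64decode ignores the embedded newlines/whitespace by default (validate=False).
  print(base64.b64decode(sys.stdin.read()).decode("utf-8"))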


This is poignant ...

A properly encrypted string is indistinguishable from a random string.

A random string, of course, cannot be decrypted and you cannot know a passphrase for it.

Is there a possible, conceivable future wherein writing or speaking a random string becomes illegal?

Here's one:

28ac5ef41de3909dee5e15194ac8712245b958ae6a690295139bc7b6c44e25ef
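A small sketch of that indistinguishability point, assuming Python with the 'cryptography' package (message text made up): both lines print 64 hex characters (32 bytes), and without the key no statistical test tells you which one is the ciphertext.

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  key = AESGCM.generate_key(bit_length=256)
  nonce = os.urandom(12)

  # A 16-byte message becomes 16 bytes of ciphertext plus a 16-byte auth tag.
  ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn!!", None)
  random_bytes = os.urandom(32)

  print("ciphertext:", ciphertext.hex())
  print("random    :", random_bytes.hex())

Which is also why the question above is pointed: a ban has no reliable way to tell ciphertext from an arbitrary hex string like the one quoted.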


I agree. For example, there's the app Oversec for Android that works with most messengers and lets you encrypt outgoing messages and decrypt incoming ones; it stays on top of the chat app.


I had to add some newlines to your text because it was borking the page layout. Sorry, it's our bug.


How would you even do that? I doubt any criminal organization just trusts the platform to encrypt their data; they do it themselves using well-known algorithms that have no backdoors. People who seriously want to encrypt will always be able to do it: you can hide data in pictures, files, whatever, and have a custom algorithm to reassemble it if you want. The only ones who will lose are the users who do not really care, now more vulnerable to man-in-the-middle attacks. That is the majority, with data that poses no threat.
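To make the "hide data in pictures" point concrete, here is a toy least-significant-bit steganography sketch, assuming Python with Pillow installed and a lossless (PNG) cover image; the file paths and payload are placeholders. It is nothing a forensic tool couldn't flag, but it shows how little code is needed to move already-encrypted bytes inside an innocuous-looking file.

  from PIL import Image  # Pillow; the output must be saved losslessly (PNG) or the bits are destroyed

  def embed(cover_path: str, out_path: str, payload: bytes) -> None:
      """Hide payload in the least significant bits of the cover image's pixel data."""
      img = Image.open(cover_path).convert("RGB")
      pixels = bytearray(img.tobytes())
      framed = len(payload).to_bytes(4, "big") + payload          # 4-byte length prefix
      bits = "".join(f"{byte:08b}" for byte in framed)
      if len(bits) > len(pixels):
          raise ValueError("cover image too small for this payload")
      for i, bit in enumerate(bits):
          pixels[i] = (pixels[i] & 0xFE) | int(bit)                # overwrite only the lowest bit
      Image.frombytes("RGB", img.size, bytes(pixels)).save(out_path, "PNG")

  def extract(stego_path: str) -> bytes:
      """Recover the payload hidden by embed()."""
      pixels = Image.open(stego_path).convert("RGB").tobytes()
      bits = [str(b & 1) for b in pixels]
      length = int("".join(bits[:32]), 2)
      body = bits[32:32 + 8 * length]
      return bytes(int("".join(body[i:i + 8]), 2) for i in range(0, len(body), 8))

  # Hypothetical usage; the payload would normally be ciphertext, not plaintext.
  # embed("holiday.png", "holiday_stego.png", b"already-encrypted bytes")
  # print(extract("holiday_stego.png"))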


I use ProtonMail as my daily driver for email. I believe that encrypting content with no backdoor keys, while keeping the metadata of who is communicating with whom, is a good compromise. I understand why intelligence agencies might need, with court authorization, to know who is communicating with whom. But content should be absolutely private.


It would be a good compromise alright - in the sense of a blackmailed official being very compromised. Anonymous speech is important to a free society - the secret ballot recognizes this fact well.

The intelligence agencies don't even need to exist. They exist to serve us, not the other way around; if they forget that, the right thing to do is to give them the Old Yeller treatment, because they will have become a menace to all.


That sounds like a good compromise and the kind of discussion that people should be having.


> But, content should be absolutely private.

Content has never been absolutely private and has only become so with widespread end-to-end encryption. This is the crux of the problem and what governments and law enforcement are uncomfortable with.


A recent policy memo[1], while variously problematic, did mention end-to-end encryption, as a tool to mitigate risks of US folks using Chinese platforms, in its envisioned environment of an economic-bloc cold war.

> Require Chinese companies to adhere to specific technical requirements [...] Technical Restrictions [...] End-to-end encryption: Mandating the use of open source encryption protocols that limits the service provider’s access to user data. This eliminates the ability for the Chinese government to access the encrypted data.

Might it now be useful to add this to the political argument for privacy? Opposition to encryption as support for Chinese government intelligence gathering.

[1] https://hackernews.hn/item?id=25918462


Even if you believe it's possible to have secure communications with the government having key escrow (it's not), protection from the government is valid.

Yesterday was Holocaust memorial day. It's still well within living memory when a legitimately elected government tried to wipe out vast swathes of their populace because they had the audacity to be born into the wrong religion, sexual orientation, disability or political views.

People like Willem Arondeus are quite rightly seen as heroes. Can you imagine how different things would have been if the Nazis had been able to get not just everyone's (semi) public facebook posts but all of their private messages as well and use that for targeting of undesirables? As a more recent example, the Rwandan genocide was massively helped by the fact that your national ID card identified your ethnicity.

Whilst it's easy to say our current government would never do such a thing, we find ourselves living in a time when the far right is on the rise again and the idea that they would get elected is not beyond imagination.

edit: facebook is obviously the wrong example to use here, but there's a difference between someone being able to get a legally issued court order to see stored communications on a platform, or even what data I have on my device, and being able to decrypt any communication as it transits over a wire, so I feel the point still stands.


Don't forget that there's reasonable evidence that the FBI (or one or more rogue FBI agents) tried blackmailing Dr. Martin Luther King, Jr. into committing suicide.[0]

Can you imagine if they had a magic button that would instantly open and read all of Dr. King's letters and phone calls? I don't think he would have committed suicide, but they really could and would have smeared him in the press and wrapped him up for years standing trial for one tiny infraction after another.

Did they ever figure out who turned on the lawful intercept capability on Greek phone switches to illegally wiretap Greek politicians? (I think it was Greece, in the early-to-mid 2000s.) Edit: it looks like the NSA[1], but I remember reading speculation that it was organized criminals.

[0] https://en.wikipedia.org/wiki/COINTELPRO

[1] https://en.wikipedia.org/wiki/Greek_wiretapping_case_2004%E2...


> guaranteeing the powers of law enforcement and the judiciary to operate on the same terms as in the offline world

Have there ever been prohibitions about which encryption I can use on paper letters?


So, it is very hard not to react to this. What does this mean for banks? For any hacker worldwide? For any foreign government agency which is not ours? What does it mean for any type of communication that by default has to be secure? What exactly are the "ends"? Encryption between car and key, between airplane and tower?

And if this is only between two persons, who will enforce this? Criminals will continue to use it, so who is the target here exactly?


> What exactly are the "ends"?

You're right, they don't care about you talking securely to your bank or any other company that they can trust to keep logs of the plaintext.

It might be less disingenuous for governments to say that they are actually against Citizen-to-Citizen encryption.


I am more and more certain that, with political ideas like these, I will welcome our AI overlords.


EU: “big tech is stealing your data. Don’t worry, we’ll fight for your rights and protect you “

Also EU: “big tech is encrypting your data, so we cannot look at it when we feel like it. Don’t worry, we’ll make it open, so we can check on you“

Out of those two, I’m much more comfortable with big tech having control over my data. Sure, they’ll use it to make more money, but they won’t use it to put me in jail.


Let's not choose between evils. These aren't really overlapping anyway. If big tech has access to the data, then authorities can demand it anyway.

In any case, this isn't just about using your data to make more money anymore. It's becoming increasingly about power, so it's barely even two separate evils. Also, government(s) are a customer for the data that big tech is "stealing."


In the real world you often have to choose between evils. It's a utopian slippery slope, yes, but in the current world I prefer to give big corps more power over my data than governments.


In current reality, these are not oppositional in any way. In fact, it's easier to stand on the right (or wrong) side of both.

If big tech have access to our data, it's insecure and authorities will have access too. If encryption is banned, our data will be more insecure and both will have more access.

There really is no need to choose.


Ok, a bit of a hijack here but it's looking like encryption is going to be backdoored! All govs seem to be gunning for it.

So, what is the solution? Having my own public/private keys that I sign everything with?


It's like trust the govt, but not Google. I don't trust either.


Is this similar to the Australian encryption rules that were imposed in recent years?


When I see governments trying to backdoor or key-escrow encryption, it reminds me that it wasn't so long ago (the Indiana Pi Bill of 1897) that a state legislature tried to use law to set the value of π.


To be fair, if they'd actually somehow achieved that (changing it to something nice and round like 3, or 10), it would have made mental maths a lot easier.


What if you combine encryption with steganography? How can they even prove you have been using encryption?


They will never stop. It's the EU's salami tactics: they proceed "slice by slice" until you realize it, and then it's usually too late. I wish they would devote this much time to actual problems and make the EU a big leader again (e.g. look at the speed at which we are getting vaccines now).


This isn't 'the EU'. It's the Council, i.e. the national governments.


What a bizarre claim. The Council is the heart of power in the EU. It is there more than anywhere else that the fundamental political decisions of the bloc are made. The EU is more like the nineteenth-century Concert of Europe than you might realise.


Yes, but that's what the enemies of European integration wanted. We could have had a proper constitution for years now with a hugely strengthened Parliament and a directly elected president but some people didn't want that. ¯\_(ツ)_/¯


1) That's irrelevant to the question at hand: that the Council is an integral part of the EU, and so the actions of the Council can appropriately be called actions of the 'EU'

2) It's not especially helpful to frame what is practically possible in politics simply in terms of what people 'want'. Regardless of what you want, there are pretty hard legal, economic and political obstacles to democratising the EU that have to be taken seriously in any strategic assessment of the situation


The enemies of integration didn't want this. They wanted the EU gone wholesale.


How do you define the EU exactly?


The issue is that "the EU" is not a homogeneous bloc, so the parliament and/or commission can be against this. In this scenario it's national governments working through the EU, so some will call that the EU and others won't.

These same differences show up in other very important scenarios, like the Greek financial crisis, where a lot of people in the parliament or the commission might have seen that short-term pain was what needed to happen, or even before that urged a banking union, but the ECB is dominated by its national counterparts, who all tried to lose the least and thus lost more as a whole. That can make it all seem very schizophrenic, and it means you can claim it's good or bad depending on which voices you listen to. At the end of the day, though, it was mostly the national governments that held the reins there; but here the parliament can simply block this.


Additionally, the Council sets the general direction, but the Commission is the body that would actually need to propose legislation, and the Parliament then needs to approve it, AFAIK.

The Council (i.e. the leaders of the national governments) meets at least four times a year, whereas the Commission is more akin to a regular government (but for the EU) and the Parliament to a regular parliament (but for the EU).


The EU is a union of countries. It's the entire body including all its citizens.

There are several governing bodies within the EU, and people should understand the political process within it and be specific. How often do you see Americans blaming the USA for some new legislation?


I have the feeling that some EU citizens aren't exactly aware that THEY are the EU, and listen instead to antagonizing outsider stories (Sputnik, whatever you want to name). There were EU elections last year, and the measures taken today are drafted by those people elected one year ago. Same for the council members - they were voted in through the respective national elections. Next time, folks, pay better attention when you vote. Or to whether you vote.


The resolution was made by the EU Council, not the parliament.

The EU Council is not voted upon as such, and it consists of the heads of state/government from all member states.

Once this reaches the parliament, on the other hand, there will be a lot of concerned citizens contacting members of parliament in order to voice their opinion.


You must agree that the heads of state/governments get there following democratic elections, thus directly influenced by the voters of respective countries. In Europe at least. So the same popular influence is still there, just via a different channel.


Definitely. However, depending on where you live, the head of state/government may very well have gotten ~20% of the local votes, and their politics may therefore be without majority backing.

The EU Parliament is a different story, though: your representatives (assuming they got enough votes to get in, of course) can be held directly accountable for how they cast their votes.

The prime minister/president, on the other hand, might not have gotten your vote to begin with.


I'm not sure I get your point. S/he definitely got somebody's vote to get there, if not mine then of the other folks across the street.


I was replying to a post saying that EU voters should be more careful about whom they vote for. I'm saying that EU voters don't have direct control over how the Council is formed, so it's more important to apply pressure to MEPs in the European Parliament than to try to do something about the Council. It's the Parliament's job, after all...


It's similar to how Russia moves its border in some countries https://www.youtube.com/watch?v=Uie3Nfecs9k

Little by little


This discussion is tiring. That initial resolution was overblown as well.

> However, the resolution makes a fundamental misunderstanding: encryption is an absolute. Data is either encrypted or it isn’t; users have privacy, or they don’t.

Well, but as always, it isn't. Or are they encrypting all customer data with the same key?

Are provisions in the US good enough? But then you have the NSA, etc.

Privacy is a right in the legal sense. Should anything be inaccessible to the law? (In the legal and in the technological sense - they're two separate issues.) This is more of an ethical question than a legal/technological one.

But back to the initial statement: no phone or system is 100% secure.

A "backdoor" implies a secret (to both user and provider) extraction of the data. Now, a judicially authorized/vetted extraction of data of a specific customer/timeframe is a different thing (even better, make it forward/backward secret). Sure, it is alarming and certainly ethically debatable. But it is not "a backdoor"


> A "backdoor" implies a secret (to both user and provider) extraction of the data. Now, a judicially authorized/vetted extraction of data of a specific customer/timeframe is a different thing (even better, make it forward/backward secret). Sure, it is alarming and certainly ethically debatable. But it is not "a backdoor"

And how do you implement one without the other?


I don't see what you mean. The first one is much easier than the second one (and the second one does not need to be implemented like the first one).

Remember, if you have access to the server or the end client you can do either one. That's why the legality supersedes the technology, because the technology is not perfect.


In society there are always competing rights and duties to be balanced. Rights are very rarely absolute.

In this case the right to privacy has to be balanced with the need in society for the law to be enforced and police investigations to be carried out.

There has never been "end-to-end" unbreakable privacy. People have a right to privacy, so their private communications are kept private, but at the same time the police have always been able to access those communications should it be necessary and according to the law. Claiming that an 'intact' right to privacy means an absolute right to absolute privacy is simply not how it has ever been.

Strong end-to-end encryption essentially means that the police cannot access communications ever, even if they get a warrant, because they don't have the technical means to do so. That's quite reasonably something that is deemed a problem. How do we deal with this?


Maths means you can't deal with this. Sorry. You only get two choices:

1. Secure legal communication that no-one can break.

2. A ban on strong non-backdoored encryption, where the backdoor keys will be misused and leaked to criminals. Meanwhile, criminals can just keep using secure illegal communication that no-one can break.

Maths means that you don't get to choose a position in-between these two choices.


Maths actually doesn't agree with you.

It is possible to construct end-to-end strongly encrypted systems with a backdoor that is governed by a strong social consensus process.

For example, Shamir secret sharing to split the backdoor key among a group of parties that you respect in aggregate. If, say, 50% of those parties agree that you've been a particularly awful criminal they might vote that your nefarious chat should be opened up for the authorities to examine, by contributing their key fragments.
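
(To make the Shamir idea concrete, here is a minimal k-of-n sketch in pure Python - my own toy illustration, not anyone's real design; the field prime, share count and quorum are arbitrary choices:)

    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime, big enough for a 16-byte secret

    def split(secret, n, k):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = secrets.randbelow(PRIME)    # stand-in for a conversation key
    shares = split(key, n=10, k=5)    # e.g. 10 respected parties, any 5 suffice
    assert reconstruct(shares[:5]) == key
    assert reconstruct(shares[3:8]) == key

Fewer than 5 shares reveal nothing about the key, so no single party (or small cabal) can open the chat on its own.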

All sorts of social consensus protocols can be built, with arbitrary rules. Probably most of them aren't such a good idea, but math does allow it.

These days we would probably use a blockchain and smart contracts to provide very strong barriers against leakage. Imagine the key fragments locked inside a smart contract that only gives out the key when N anonymous decision makers concur that the key should be given out, as well as deciding to whom it should be given. In some zero-knowledge protocols it would not even be possible to find out who voted, only the result.

It is also possible, in an ideal maths world, to design systems where an AI trawls through private conversations but can only reveal information if certain conditions are detected. That sounds a bit dark, but if the conditions are also governed by social consensus processes, perhaps that isn't so bad. For example, if the AI is instructed (by social consensus, not authoritarians) "only extract a network of conversations if they show a clear pattern of $ParticularlyAwfulCrime, otherwise leave people to their privacy", perhaps that isn't so bad. We don't have the technology to do that now, but we may get it eventually; math is not the obstacle.


> they might vote that your nefarious chat should be opened up

You are implicitly assuming that this is the only way that the chats can be decrypted. This is only true until the keys are compromised. And the keys will be compromised, because:

1. The keys are worth a lot to the right people.

2. The parties which have the keys have no real incentive to secure the keys well.

3. There is no way for a party with a key to become aware that the key has been compromised.

1, 2, and 3 will combine into a state of the world in which the keys are always compromised.


I think 2 and 3 are incorrect under my scheme. If parties host key fragments themselves, unencrypted, then I'd agree with both 2 and 3.

However if the keys are encrypted in such a way that parties can only decrypt them by posting confirmation to a blockchain (or equivalent), two things occur:

2a. The parties have a very strong incentive to secure their transaction-posting-keys, because those are the same keys as they use to protect their own other valuable assets such as cryptocurrency, or whatever else they are securing these days such as DAO governance. Some people will be sloppy (and lose their own money and other things), but at a large scale it's perhaps still an open question how many people will be that sloppy. If it's not too many, the system is not compromised.

3a. Any party will become aware their own transaction-posting-key is used as soon as they see a transaction under their identity posted to the blockchain they participate in. They can also see how it's used.

I've said blockchain but it doesn't actually need a blockchain. If there is one, it can be private or public. Either way, 2a and 3a apply.

The main thing is there is an agreed "consensus location" into which people can choose whether to pool enough information to reconstitute the originally protected conversation key, with their only unencrypted secret being their multi-use key they are strongly incentivised to guard (2a), and to which they will only post if they can detect when it's used and limit how it's used (3a).


Those do sound like reasonable design decisions, but in practice the systems which are eventually actually implemented are basically never as well-designed as that.


> 2a. The parties have a very strong incentive to secure their transaction-posting-keys, because those are the same keys as they use to protect their own other valuable assets such as cryptocurrency, or whatever else they are securing these days such as DAO governance.

If you were to be (s)elected as one such holder of an encrypted key fragment, what's to say you're going to use the same key (the one your key fragment is encrypted with) for your personal Bitcoin account? I wouldn't; I'd get another one for that. Wouldn't you? If you'd use the same: Why?


That has so many wrong assumptions. First off the assumption that there is a group trusted in aggregate. If I trusted them they would have been party to it in the first place!

Second, the logistics of it are "clever dick" as opposed to a real solution - like suggesting using one-off Huffman encoding to compress a large file to only 1 bit.

Third - blockchain to protect against leakage? That is the exact opposite of its job. Buzzwords aren't magic spells.


Your #1: Your trust is irrelevant when discussing whether mnw21cam's list is complete, as your trust was not part of the criteria.

(That said, on trust "If I trusted them they would have been party to it in the first place!" doesn't make sense. There's a huge difference between wanting to share all your content with a group of people all the time, versus trusting those people to make a collective decision to release your documents, for example when they agree that you have died. People are seriously examining this sort of mechanism now because it's relevant to modern life. For example releasing your password and accounts store to family or trusted friends upon your death or incapacitation, using some kind of distributed dead man's switch that needs human judgement to confirm.)

The fact is, mnw21cam's statement that there are only two possible branches because of maths, depends on the assumption that keys "will be leaked" being inevitable.

A sibling commenter believes it is inevitable they will be leaked no matter how sophisticated an aggregation mechanism is used. That is a reasonable argument, though one I disagree with.

If you have a threat model strong enough to break the distributed consensus mechanism of things like Ethereum, then you have a threat model that invalidates branch 1 in mnw21cam's list as well as branch 2, so you cannot win: Under that model, you cannot have "secure legal communication that no-one can break" because your own device is vulnerable to compromise as well. You should find a way to talk without a recording device.

Your #2: What I've said is a real logical possibility, and in fact is what we might actually end up with in a number of areas of life. It is not the 'clever dick' nonsense you think it is. It might be an undesirable idea, but it is a technically possible one.

Your #3: Yes blockchains store and publicise. They also implement strong distributed consensus, and on top of those other things are layered, such as Ethereum-style smart contracts, and zero-knowledge calculations. If you think those cannot be used to control the release of fragmented or encrypted secrets driven by measurements of human decisions, you haven't understood them yet. I really recommend you look at ZKProof.org, and have a think about how privacy-maintaining blockchain coins like Zcash and Monero are able to use a public blockchain to exchange secret transactions.

Note: I have no stake in this, I'm not a cryptocurrency or blockchain fan particularly. But I do understand how they work, and I'm not fooled by the buzzwords (at all, I'm quite skeptical).


Some interesting thoughts here, but the issue of the key leaking is still a problem with your proposed protocol. After the first time all these trusted parties come together and reveal the key, someone still has to actually take the key and decrypt the relevant message. After that point, the key exists in plaintext, and it will be very difficult to ensure that it remains secret. Same problem with giving the key only to an AI: The AI needs the key, and it will be difficult to ensure that the AI system isn't hacked, especially if it's the kind of large distributed system that would be required to process everyone's messages.

What you're looking for is a protocol where N parties have to agree in order to decrypt any given message, and agreement to decrypt a particular message doesn't allow them to decrypt any other messages. Here's one that might accomplish that (disclaimer: I am not a cryptographer):

- People can use whatever end-to-end encryption scheme they like to send their messages, but they must Shamir-split the key into N parts and send those parts to the relevant authorities.

- We can check that people are obeying this protocol without violating their privacy by the following method: every time someone sends a message, they XOR it with a string of random bits. They then send it in two parts: the XORed message and the original string of random bits. The original message can only be reconstructed from both parts. Each part is sent with a separate key. The N authorities randomly choose one of the two parts to open and check that it decrypts properly (messages will include a hash of their contents, so this is easily checkable and difficult to fake). A toy sketch of this split-and-audit step follows after the list.

- Of course, people may not trust the authorities to only decrypt one of their message pair. The solution to this is that the list of N authorities is always the same, except for the last one: this is a choice server. People can use whatever choice server they like. However, all choice servers are required to be auditable by both the government and the public. It's the choice server's job to (1) randomly choose 1 message from each pair to decrypt, (2) not release the Shamir-fragment for the key to the other message unless given a warrant by a judge.
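
(Here is that toy sketch, in Python - my own illustration of the idea, not a vetted protocol; encrypting each half to the authorities is left out, this only shows why opening one half reveals nothing while still allowing a compliance check:)

    import hashlib
    import secrets

    def split_message(msg):
        pad = secrets.token_bytes(len(msg))               # one-time random pad
        masked = bytes(m ^ p for m, p in zip(msg, pad))   # msg XOR pad
        # Each half commits to its own contents so an opened half is checkable.
        return (
            {"data": pad, "digest": hashlib.sha256(pad).hexdigest()},
            {"data": masked, "digest": hashlib.sha256(masked).hexdigest()},
        )

    def audit(half):
        # An opened half is just uniform random bytes, so the auditor learns
        # nothing about the plaintext, but can verify the escrow is well-formed.
        return hashlib.sha256(half["data"]).hexdigest() == half["digest"]

    def reconstruct(half_a, half_b):
        return bytes(a ^ b for a, b in zip(half_a["data"], half_b["data"]))

    a, b = split_message(b"meet at noon")
    assert audit(secrets.choice([a, b]))          # random spot-check passes
    assert reconstruct(a, b) == b"meet at noon"   # both halves (i.e. a warrant) needed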

Of course, this will be inconvenient and imperfect, and will add a heck of a lot of overhead. Some choice servers will be found to be corrupt, in favour of either the intelligence agencies or the criminals, having managed to hide their corruption from audits.

It also will not stop determined criminals from using their own encryption. It's not all that easy to tell when people are sending encrypted messages to each other; the messages will just look like random bits. There are plenty of places in perfectly innocuous-seeming messages to hide random bits. They could even be hidden as noise added to a particular image. A random-bit-hiding arms race is one that cipher-users are inevitably going to win.
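
(As an aside on how easy that hiding is, here's a toy least-significant-bit example in Python - my own illustration, nothing from the article; real steganography is more careful about detectability:)

    import secrets

    def hide(cover, payload):
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        assert len(bits) <= len(cover), "cover too small"
        stego = bytearray(cover)
        for i, bit in enumerate(bits):
            stego[i] = (stego[i] & 0xFE) | bit    # overwrite the LSB only
        return stego

    def extract(stego, n_bytes):
        bits = [b & 1 for b in stego[: n_bytes * 8]]
        return bytes(
            sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
        )

    cover = bytearray(secrets.token_bytes(4096))  # stands in for raw image pixels
    secret = b"already-encrypted payload"
    assert extract(hide(cover, secret), len(secret)) == secret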


> the issue of the key leaking is still a problem with your proposed protocol

Fair enough. I intended that it's not a long-term key, but something more appropriate like a session key.

Or better, more like a "query key" that limits what can be extracted from a session to whatever has been approved to be extracted. (See "zero-knowledge database".[1])

> What you're looking for is a protocol where N parties have to agree in order to decrypt any given message, and agreement to decrypt a particular message doesn't allow them to decrypt any other messages

That's what I meant, yes. Sorry for not making that clear.

> Same problem with giving the key only to an AI: The AI needs the key, and it will be difficult to ensure that the AI system isn't hacked

Ah... The other thing I didn't make clear is that the AI runs inside homomorphic encryption[2], or another protective bubble against access (one can imagine a quantum state with this property). This is why I said we don't have the technology to do it yet. Not because of the AI, but because we don't have sufficiently powerful methods to run an AI (or any large program) inside a bubble that prevents it from being inspected. But we know it's possible in principle.
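
(If anyone wants a feel for "computing on data you can't read", partially homomorphic schemes exist today. A toy example using the third-party python-paillier library ("pip install phe") - additive only, nowhere near running a whole AI, which would need fully homomorphic encryption:)

    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    enc_a = public_key.encrypt(17)   # encrypted by the data owner
    enc_b = public_key.encrypt(25)

    enc_sum = enc_a + enc_b          # computed blind, without ever decrypting
    enc_scaled = enc_a * 3           # multiplication by a plaintext constant

    assert private_key.decrypt(enc_sum) == 42
    assert private_key.decrypt(enc_scaled) == 51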

[1] https://www.cs.jhu.edu/~susan/600.641/scribes/lecture15.pdf

[2] https://en.wikipedia.org/wiki/Homomorphic_encryption


Given the existence of end-to-end strongly encrypted group chat systems, it is trivial to construct something between your #1 and #2. Simply use end-to-end strongly encrypted group chat and require that when a user creates a new chat room, they invite a government agent into the room.


This is simply not true. Conversations on private premises have never been legitimate targets of surveillance, and it's not generally* been illegal to encode or encrypt mail or other private papers or communications. Mandatory back doors are an entirely novel and highly intrusive encroachment on civil liberties.

Edited from "has never been"


> ...it's never been illegal to encode or encrypt mail or other private papers or communications

In what country? France, for example, used to restrict the use of strong encryption. Example summary: http://www.opengroup.org/security/meetings/apr98/French-Encr...


This seems to be about hardware/software that performs encryption? I've yet to see any modern democratic government prevent someone from using a one-time pad, a piece of paper and a pencil to encrypt a message.


It has pretty much always been illegal to encrypt telex, as far as I know. Not that telex matters much any more; does anyone still use it?


Before all these electronic gadgets it was pretty absolute. Just go to some remote place with no one around and talk.


>Strong end-to-end encryption essentially means that the police cannot access communications ever, even if they get a warrant, because they don't have the technical means to do so.

This isn't true. It means they cannot access communications without revealing to at least one of the communicating parties that they are doing so. The messages are still decrypted on the endpoint devices for reading, and a warrant could be used to acquire those devices from their owners. Device encryption can complicate this some of the time, but that remains fairly hit-and-miss (and is a separate issue: you could have mandatory key disclosure without backdoored E2E, and vice versa).

>the police have always been able to access those communications should it be necessary and according to the law.

And neither is this. To find out the contents of a verbal conversation that was not recorded (most of them, even today), the police would need to ask one of the participants to tell them what happened, and the participants can usually refuse to self-incriminate. Even if the conversation was recorded, the recording would usually be in the possession of one of the participants, since recording a conversation without being a participant is illegal, at least where I live.

To find out the contents of a letter, they would need to intercept it during transit or recover it from its destination at a later date. If the letter has been destroyed, they're back to asking the recipient or sender - who, again, can usually refuse to testify - what it said.

To find out what was said via instant messaging under this no-end-to-end-encryption scheme, they can, at any time, simply ask a third party who cannot refuse to testify. This is an utterly unprecedented invasion of privacy, and I struggle to see anything that could justify it in a (part of the) world where crime seems to be on a downward trend.



