I hope that's the case (I haven't read the legal text), but how would you protect yourself against that? Would we have to keep tamper-proof screenshots (or archive.is-style snapshots) for every external link we post?
Screenshots are terrible proof. They were, however, used in a Swedish piracy court case many years ago. That led to someone building the "proof machine"[1], which generated screenshots of BitTorrent clients showing any IP you entered. Try it:
Cryptographic, digital notaries. You could notarize screenshots and/or saved HTML pages. That's how I've always done it when I thought the contents of a page would be significant and need proof later on.
EDIT: There are also newer services that pull the site for you, MITM-style, to do what we're describing. Startups come and go, though, so I recommend scripted or manual scraping combined with a long-running notary that will still have the evidence when the trial happens. See the sketch below for the scripted part. :)
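A minimal sketch of the scrape-then-digest step (the URL, output filename, and record shape are just illustrative; the actual submission format depends on whichever notary you use):

    import hashlib
    import json
    import time
    import urllib.request

    def snapshot_and_digest(url):
        # Fetch the raw page bytes and compute the digest to hand off.
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        # Keep the raw bytes yourself; only the digest goes to the notary.
        with open("snapshot.html", "wb") as f:
            f.write(body)
        return {
            "url": url,
            "fetched_at": int(time.time()),  # local clock; trusted time comes from the notary
            "sha256": hashlib.sha256(body).hexdigest(),
        }

    print(json.dumps(snapshot_and_digest("https://example.com/page"), indent=2))
    # The next step is provider-specific: submit the sha256 to an RFC 3161
    # timestamping authority (or similar notary) and keep the signed reply.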
While we could implement tamper-proof verification of whatever has been archived, how do you prove that the archive (snapshot) matches what was actually at the target at creation time and wasn't modified before archival?
I'm thinking in purely technical terms here, and I could easily argue that such evidence is inadmissible unless a certain number of independent, trusted snapshot services archive each link too and allow one to compare the decrypted archives. To achieve that, one would probably want an indirection layer for linking, so that such archives don't have to archive everything that exists, just the links that go through said indirection.
Then the last question is how many independent parties you need to trust for the verification, considering that motivated attackers could compromise, say, the 3 most-used archiving services. Or you could bind the archival of a link across a federated set of snapshot services with a consensus algorithm, as sketched below. Then you'd ask yourself how many such networks (each consisting of multiple services) you'd trust.
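To make the comparison step concrete, here's a minimal sketch, assuming you've already fetched the same link's snapshot bytes from several independent archiving services (the sample bodies and quorum size are made up):

    import hashlib
    from collections import Counter

    def agreed_digest(snapshots, quorum):
        # Given raw snapshot bytes of one link from independent archiving
        # services, return the digest at least `quorum` of them agree on.
        digests = [hashlib.sha256(body).hexdigest() for body in snapshots]
        digest, count = Counter(digests).most_common(1)[0]
        return digest if count >= quorum else None

    # Hypothetical: 5 independent archives of the same link, one of which
    # was fed a shadow version.
    snapshots = [b"<html>same content</html>"] * 4 + [b"<html>shadow</html>"]
    print(agreed_digest(snapshots, quorum=3))  # digest of the majority copy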
> Each client receives different content (at some level).
Exactly, and now imagine how easy it is for someone motivated, with the right resources, to meddle with that.
Similarly, it's hard to prove that a machine's user consciously navigated to an illegal link. There's too much automation on the web and on networked machines, and once you factor in link shorteners, there are too many alternative explanations for evidence like machine-x-owned-by-mike-miller-accessed-mp3forfree-artist-album-2016.pdf to be admissible.
It's the equivalent of me walking down a street where I didn't know drugs or weapons or humans were sold, and being arrested just for walking through there. In real life you have some clues about where not to go, especially if you're from the town, but on the internet it's too easy to have your machine load a random illegal link or search for alarming terms on Google without your consent.
You publish the hash chains, with signatures, in places you can't retract; some use The New York Times, for instance. As for the signatures, they can use one or more HSMs. One or more atomic clocks plus high-quality NTP for timing. An open-source client checks all of it right after it happens, maybe keeping a local copy. Every other aspect is just standard INFOSEC.
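For the hash-chain part, a minimal sketch of the linking step (the genesis value and document digests are placeholders; a real notary would also sign each head with the HSM):

    import hashlib

    def extend_chain(prev_head, item_digest):
        # Fold a new document digest into the chain: anyone holding a
        # published head can verify every earlier entry, and no entry can
        # be rewritten without changing every later head.
        return hashlib.sha256(prev_head + item_digest).digest()

    head = b"\x00" * 32  # placeholder genesis value
    for doc in (b"snapshot-1", b"snapshot-2", b"snapshot-3"):
        head = extend_chain(head, hashlib.sha256(doc).digest())

    # `head` is what gets signed and published somewhere unretractable,
    # e.g. a newspaper ad.
    print(head.hex())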
I don't have time to answer the last question right now. Off to work. :(
But the last one is the most interesting if you want to finally be able to trust the results. It's the same problem faced in distributed systems, but in, say, NoSQL databases I haven't heard anyone consider the possibility of 4 of 6 quorum members being undermined, or how to detect that.
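For a rough sense of the numbers, the classical Byzantine fault-tolerance bound is n >= 3f + 1, i.e. n members tolerate at most f = (n - 1) // 3 arbitrarily misbehaving ones:

    def byzantine_tolerance(n):
        # Classical BFT bound: n members tolerate at most f = (n - 1) // 3
        # arbitrarily misbehaving (not merely crashed) members.
        return (n - 1) // 3

    for n in (4, 6, 7, 13):
        print(n, "members tolerate", byzantine_tolerance(n))
    # 6 members tolerate only 1 byzantine member; 4 of 6 undermined is far
    # beyond what any quorum protocol can detect, let alone survive.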
Yes, but how do you conclude beyond doubt that what was hashed hadn't been tampered with, leading a site to believe it's legal or malware-free content when it actually isn't and you've just been fed a shadow version of the real thing? We can reach for consensus algorithms, but ultimately, without something like Van Jacobson's Content-Centric Networking, it will be very hard to prove with enough confidence for a judge to make a ruling.
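To illustrate what content-centric naming buys you: the name of a piece of data is derived from its bytes, so any receiver can verify what it got, no matter which cache or peer delivered it. A toy sketch, with a bare SHA-256 digest standing in for CCN's richer naming scheme:

    import hashlib

    def content_name(data):
        # In a content-centric scheme, the name is derived from the bytes.
        return hashlib.sha256(data).hexdigest()

    def verify(name, data):
        # Any receiver can check the delivered bytes against the name it
        # requested, no matter which cache or peer served them.
        return content_name(data) == name

    original = b"<html>the real page</html>"
    name = content_name(original)
    print(verify(name, original))                        # True
    print(verify(name, b"<html>shadow version</html>"))  # False
    # This pins delivery integrity only; it still can't prove what the
    # publisher's bytes were at linking time, which is the point above.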
Sure, and if the quorum consists of 6 members and you manage to undermine 3 of them by making sure they're fed the same custom shadow content, you'll have a split.
Read it now. The ruling acknowledges that the target is mutable, but it also says that the poster of a link on a for-profit page must check the target content. However, I cannot find anything about how one should prove it was legal at the time of linking. The only sensible thing I can find is that, once notified, a link has to be taken down. Did I miss something?
Perhaps it would work the other way around? The prosecutor has to positively prove that when you established that link, it was pointing directly at illegal content (and based on the context of your linking, that you knew it was illegal).