I prefer SSL termination on the client-side (my computer, my data only).
I like to have the ability to view my SSL traffic in plain text.
Should the user be able to see what her computer is sending out? I think she should. And encrypted traffic should not be some special exception.
Installing someone else's "MITM" software to decrypt SSL seems unnecessary.
It is much simpler to generate and install your own "fake" certificates that you control.
stunnel is one option.
There are others. socat, Pound, etc.
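For concreteness, a client-mode stunnel setup might look like the following sketch (the host and ports are examples of mine, not from the discussion): the browser talks plaintext HTTP to a local port, stunnel wraps it in TLS to the remote server, and the loopback leg can be watched with any packet sniffer.

```ini
; stunnel.conf -- terminate TLS on the client side (a sketch;
; host and ports are placeholders, not from the comments above)
foreground = yes

[web]
client = yes
; local plaintext endpoint, sniffable on loopback
accept = 127.0.0.1:8080
; the real TLS endpoint
connect = www.example.com:443
```

Pointing tcpdump or Wireshark at 127.0.0.1:8080 then shows the traffic in plain text before it is encrypted.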
It should be the user who has the final decision over which certificates to trust. Users are the real "Certificate Authorities". They should have full control over encryption and decryption should they want to exercise it.
Is it wise to irrevocably delegate the decision to trust/not trust to website owners and browser authors? Perhaps those promoting solutions like "TACK" should give this more thought.
> It should be the user who has the final decision over which certificates to trust. Users are the real "Certificate Authorities". They should have full control over encryption and decryption should they want to exercise it.
And they do.
I'm not sure what you're getting at here. You and I and my mother all have the ability to edit the root CA certificates on our computers and add our own, if we wish.
But I'm seeing more and more authentication information being incorporated ("baked in", pre-installed, whatever) into browsers, whether it is lists of "valid" TLD's, certificates for "approved" CA's, or chosen individual website certificates.
Personally, I think this information should be cleanly separated from the software that may use it rather than pre-installed and "hidden from the user".
> Schneier.com Has Moved
>
> As of March 3rd, Schneier.com has moved to a new server. If you've used a
> hosts file to map www.schneier.com to a fixed IP address, you'll need to
> either update the IP to 204.11.247.93, or remove the line. Otherwise,
> either your software or your name server is hanging on to old DNS
> information much longer than it should.
Ok, how should I "authenticate" that the site at the new address is the "real" one?
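One manual answer is do-it-yourself pinning: record the certificate's fingerprint before the move, then compare it against whatever the new address serves. A sketch with the openssl command line (the saved fingerprint is a placeholder, and `timeout` guards against an unreachable host):

```shell
# Fingerprint recorded before the move (placeholder value).
old_fp='sha256 Fingerprint=AA:BB:...'

# Fingerprint of whatever certificate the new address serves now.
new_fp=$(timeout 5 openssl s_client -connect 204.11.247.93:443 \
             -servername www.schneier.com </dev/null 2>/dev/null \
         | openssl x509 -noout -fingerprint -sha256)

if [ "$new_fp" = "$old_fp" ]; then
    echo "same certificate as before the move"
else
    echo "certificate changed -- investigate"
fi
```

This only helps if you recorded the fingerprint out of band beforehand, which is exactly the point: the trust decision stays with the user.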
It sounds like you are describing an "implementation problem" (i.e., OpenSSL's code sucks).
But then you suggest this could be a reason to throw out the notion of "ciphersuite flexibility".
Aren't these two separate things?
Perhaps the flexibility is good.
Maybe the problem is one of complexity and quality control.
Too many ciphers, and the incorporation of ones of dubious quality.
I still haven't seen anyone mention the other SSL libraries, e.g., axssl, polarssl, matrixssl, etc.
As for CA "infrastructure", what if the user uses OpenSSL's ca function?
She creates her own CA certificate and key and installs it on her device.
Then she downloads a website's certificate, signs it and installs it on her device.
Regardless of whether a website has a paid-for certificate from a commercial "CA authority", she needs to make the final decision whether or not to trust it.
The user is the ultimate arbiter of which website certificates she wants to sign and install. (Not browser authors.)
Websites just need a central repository to publish their certificates.
They already do this for their "domain names" by having them published in a publicly accessible zone file (ideally, the user can download the zone file, as well as query it piecemeal over a network).
We as users trust that these zone files are accurate: specifically, we assume the IP addresses for the website's nameservers are correct.
Prior to TLS 1.2, all of the mainstream ciphersuites are bad.
The fact that you have to upgrade to TLS 1.2, which includes more than just new ciphersuites, somewhat undercuts the idea that the ciphersuite mechanism provided much protection.
Ultimately, the protocol might have been just as well off by defining a single ciphersuite and accepting that a break in that ciphersuite would necessitate a protocol update.
"The fact is that no programmer is good enough to write code whic is free from such vulnerabilities."
"...you are kidding yourself if you think you can handle this better than the OpenSSL team."
Well, I can think of at least one example that counters this supposition. As someone points out elsewhere in this thread, BIND is like OpenSSL. And others wrote better alternatives, one of which offered a cash reward for any security holes and has afaik never had a major security flaw.
What baffles me is that no matter how bad OpenSSL is shown to be, it will not shake some programmers' faith in it.
I wonder if the commercial CA's will see a rise in the sale of certificates because of this.
Sloppy programmer blames language for his mistakes. News at 11.
"But the book doesn't mention either technique..."
As other commenters are pointing out, the book's aim is to
teach imperative programming with C to students of SML. It is not aimed at teaching functional programming in C.
The technique you describe -- using what UNIX offers, e.g., pipe and fork -- is used a lot by djb.
This book seems like a gentle intro to C.
And there's nothing wrong with keeping things simple in the beginning.
With respect to books and tutorials on C, I have seen much
worse.
Master the basics of C, then go read djb's code for lessons
on how to structure programs and smartly utilise what UNIX
has to offer. Keep K&R and Stevens nearby for reference.
A classic example is passing file descriptors around instead of using popen(), which runs the command through a shell. There is no book on C that teaches that, but it is elegant programming indeed.
Even when one strives to keep things simple, C (and UNIX) have many gotchas.
That section on linked lists (cons in C and SML) was really nice. I don't think I've ever seen such clear, plain C for working with singly linked lists (which even checks the return code for malloc!).
I actually had to look up what K&R had on it, and there's a section on tree-structures -- but it ends up being cluttered (although that's justifiable in K&R -- the focus isn't just on simple example code).