I'm as strong a supporter of a citizen's right to encryption as anyone, but I actually think that Mr. Comey's testimony was accurate and forthright, and his framing of the issue of encryption as it relates to the capabilities of law enforcement was appropriate and well-reasoned.
"He [the President] shall have Power, by and with the Advice and Consent of the Senate, to make Treaties, provided two thirds of the Senators present concur;"
I think that your claim that the only role of the Senate is that it "must be notified in the event the President has made a sole-executive agreement" is a misrepresentation. In fact, two-thirds of the Senators present must vote to approve every treaty before it becomes valid.
I disagree very strongly with your purported equivalence between Karunamon's argument and that of the NSA.
My primary problem with the NSA is that they (a) use their position of power to install backdoors in hardware/software/infrastructure, (b) do it in secret, (c) claim to be in some way "above the law" w.r.t. secret courts, (d) use public taxpayer money, and (e) motivate their behaviour using bogus anti-terror claims. None of these infractions exist in the "right to be forgotten" scenario.
Fundamentally, in the story of the NSA, it is the NSA that is in a position of power that can and will be abused. In this case, it is the power to scrub the entire internet of information by imposing centralized censorship which, in my mind, is simply too much power to put in one place.
The equivalence is in the rhetorical justification, not the methodology. In both cases, an appeal to security is used to trivialize the importance of privacy.
For the NSA, terrorism > privacy every time. For Karunamon, "censorship" > privacy every time. In both cases, the mechanism is an appeal to security in order to avoid a more nuanced discussion of the issue.
> to scrub the entire internet of information
And this is where the rubber hits the road. "Any possible risk of <bad thing here> means privacy doesn't matter". That's the sort of crap reasoning used by the NSA.
The fact is that the risk of what you're describing is incredibly low, and the good done by the law outweighs even a high-magnitude abuse, because abusing the law in that manner is highly unlikely to succeed.
This is what I mean. After rejecting the "stop <bad thing here> at all costs!" logic, we can do an actual cost-benefit analysis and take into account the extremely low probability of <bad thing here> actually happening.
The information is still there, but private citizens don't need to pay hundreds/thousands to SEO firms in order to get embarrassing gossip sites off the first page of Google results.
Let's be clear: the dichotomy in this case is between lazy reporters' and SEO firms' right to profit, and an individual citizen's right to privacy. Since I view lazy reporters and SEO firms as parasites anyway, the choice is clear for me.
Reframing this issue as a matter of "security from dictatorial abuse of power" is playing the same game as the NSA. And worse, it's disingenuous. If the mechanism is abused, the Streisand effect kicks in and undoes the damage. But there's no reason to go around harming private citizens just because "censorship!"
Not at all! I and other comment authors on this thread have mentioned that because of the nature of both the law and the internet, abuse is highly unlikely.
This means that the 100% certain probability that people are currently marginally hurt by search-engine-magnified sleazy journalism far outweighs the almost-negligible probability of high-magnitude abuse of the law.
Other threads on this story have also dealt with other mitigating factors which decrease the probability of abuse, such as the fact that public figures are excluded.
I'm not trivializing censorship; I'm making a cost-benefit analysis in which the importance of an impact is a function of both its probability and its magnitude, instead of attributing certainty to every outcome and reasoning on impact alone.
Incidentally, even the US Supreme Court -- in a country where free speech and anti-censorship are (constitutionally) paramount -- does this sort of analysis. Libel, inciting violence, and a slew of other exceptions to free speech have been upheld by that court.
What I'm doing is considering the overall welfare of society, instead of picking and choosing individual issues that always must come first no matter what. In this case, there's a clear and present good done by the law, and the chances of bad things happening are minimized by safeguards.
edit: my comments are sometimes living documents. Mostly tweaking of the last 2 paragraphs.
> In this case, it is the power to scrub the entire internet of information by imposing centralized censorship
That is a bafflingly bad reading of what has actually happened.
Nothing has been scrubbed from the Internet. Some articles are harder to find if you include a person's name - but you can still find those articles with that search term if you use google.com with ncr (no country redirect).
> Part of the appeal of libraries is abstracting away implementation details. I want to be able to have foo.h #include a hash table, or a binary search tree, depending on #ifdef's or its internal implementation. The user of foo.h shouldn't even have to know about it.
I agree with your premise (re: abstraction of implementation details), but I think you've applied it incorrectly to reach the wrong conclusion.
If the implementation of foo (contained in foo.c) uses a hash table or binary search tree, then foo.c may of course include whatever it likes. On the other hand, foo.h is (by definition) the interface to foo; any headers the interface requires (e.g. for function argument types) ought to be explicitly understood by the user of the library.
It depends on whether you delegate all memory allocations to the C file.
In the C code I work on, dynamic memory allocations are considered expensive and problematic (fragmentation, runtime errors) so we prefer to move the responsibility for allocation as far up the call-stack as possible, where it is often possible to minimize or coalesce dynamic allocations.
To do this, the interface files must expose the correct sizeof() so that the caller can allocate. We do this by splitting the header file into foo.h and foo_private.h (foo.h #includes foo_private.h). This separation is not for enforcement, but for API clarity and convention (if it's in a private.h, you have no business assuming anything about it).
This forces us to rebuild when the sizes of private structures change (which can be a significant downside) but allows us to avoid dynamic allocations, their costs and their many potential error paths.
In this approach, foo_private.h may very well #include a hash_table or whatever header which should *not* be exposed to the #includer.
The real WTF (that you, me, and the OP probably agree on) is the California initiative process, which forces/allows citizens to act as legislators.
But to your point, the first substantive link off the official Prop. 30 summary page (http://voterguide.sos.ca.gov/propositions/30/analysis.htm) makes clear in several places that the increases will take effect starting Jan. 1, 2012. ("Because the rate increase would apply as of January 1, 2012, affected taxpayers likely would have to make larger payments in the coming months to account for the full-year effect of the rate increase.")
If the OP was worried about taxes, he should feel lucky that 38 failed ;-).
"The real WTF (that you, me, and the OP probably agree on) is the California initiative process, which forces/allows citizens to act as legislators."
I definitely agree with this (-:
You're right, in that a reasonably informed voter should have known the nature of the proposition. Unfortunately, due to the broken initiative process, those weighing the options tend not to be sufficiently informed. In my opinion (though not that of the US Supreme Court, see Calder v Bull), such ex-post-facto tax laws should be as prohibited as similar laws are in criminal circumstances.
We're continuing to digress, but I have to imagine that the framers of the (generally successful) US constitution must be having a laugh at the expense of the framers of the (less successful) California constitution regarding the initiative process.
Just a correction: the code is actually perfectly portable. An integer constant expression with the value 0 is, by definition in the standard, a null pointer constant (see Section 6.2.2.3 "Pointers" in C90). NULL is defined primarily for convenience (so a reader knows you mean a null pointer instead of an arithmetic zero). Of course, the bitwise representation of a null pointer need not be all-bits-zero; (void *)0 always yields a null pointer, but that pointer's bit pattern is implementation-specific.
I stand corrected: it does define the integer constant 0 to be converted to the null pointer. In C11 this is Section 6.3.2.3. To make it even more confusing, Section 7.19 "Common Definitions <stddef.h>" defines NULL to be an implementation-defined null pointer constant.
I was always taught that (void *)0 is a valid definition of NULL, but that 0 was not necessarily.
Just a note - all radiation levels reported from around the plant are measured in milli- (m) or micro- (u) sieverts (Sv) per hour. For comparison, the worldwide average background radiation level is approx. 0.274 uSv/h, and the US recommended continuous occupational limit is 5.71 uSv/h. The highest level I've seen reported is 400 mSv/h, or about 70,000x the recommended limit. Info from Wikipedia.
I've seen lots of things reported, but I'm always leery of things that have gone through two or three languages to get to us. We've all played that translation party where things get translated into gibberish. I'm going to have to take it with a grain of salt when people get their news from the French embassy (which, last I knew, was not known for nuclear expertise, but feel free to prove me wrong).
Of course this isn't a feature of the language -- but most implementations (e.g. gcc) technically allow this, if you really, really know what you're doing. For example:
Writing code that writes code, where that written code is executable at machine speeds rather than interpreted, is a qualitative difference over a simple "eval" implementation; and it can most definitely be a feature of the language.
A language which supports it as a feature may do things like partial evaluation of closures, inlining of runtime-constructed code, or transforming, e.g., the typical block-passing idiom of Ruby into specific optimized cases. Being able to rely on such transformations in turn lets you build more abstract (i.e. more highly parameterized) libraries, because you know you won't have to pay the same degree of abstraction tax.
Yes, you can generate machine code yourself, but then you have to write machine code ...
I'm suggesting a JIT compiler as a built-in language feature so that you could write code in a high-level language and have machine code generated that was specialized based on the program's dynamic state.
It could be done as a library, but it would benefit from language integration so that you wouldn't have to write the code as strings or ASTs.