At least in the early 2000s, Bloomberg had strict requirements about this. Their financial terminal has a ton of math calculations. The requirement was that they always had live servers running on two different hardware platforms, with different operating systems, different CPU architectures, and different build chains. The math had to agree to the same bitwise results. They had to turn off almost all compiler optimisations to achieve this, and you had to handle lots of corner cases in code: you can't trust NaN, Infinity, or underflow behaviour to be portable.
They could transparently load balance a user from one different backend platform to the other with zero visible difference to the user.
A guess rather than knowledge, but I expect it was called that because of the accidental occurrence of color on broadcast TV when the image includes something with black-and-white stripes at the right spacing. So the name predates the intentional usage by the computer system.
I think in this alternate universe the Apple II analog would be the first cheap computer that could run a spreadsheet. That really takes a 40-column display. So I think it would have waited for the 2 MHz 6502 to handle the doubled line frequency.
I do think there is a strong possibility the people in charge in the US government believe an Iran state sponsored terrorism attack would be a political benefit to them. Such things boost support for the sitting President, and could also give political cover for additional authoritarian acts to help them retain power. Would they do the school attack on purpose? Maybe? But for sure they keep the war going until they generate the response they are looking for...
Is this something European style privacy laws would protect against? Though given the US political situation we are far from being able to enact any kind of anti-authoritarian protections...
You can increase your chances by crafting the laws differently, at least.
A law that says the government can't ask for this stuff doesn't help very much. They'll ignore it when it suits them.
A law that says it's illegal for private companies to hand it over would be better. When caught between a request from the government and a law that says they're not allowed to honor that request, there's a good chance they'll obey the law rather than the rogue agency.
A law that says it's illegal for private companies to collect this data in the first place would be even better. It could still be worked around, but it's more likely to be uncovered, and they'd only get data after the point where they convinced a company to start collecting it.
- .. including any hidden "legitimate interest" sections that are being treated as a second, opt-out form of consent for things that don't actually qualify as legitimate interest
- and the companies actually followed it
Then in theory the companies won't have that data. But doing 1 is tedious, companies use dark patterns to discourage you from doing 2, and it's hard to audit whether they've done 3, so most people are probably in those data sets.
Also, a government likely to buy this data for purposes like those in the original article is unlikely to be the type of government that goes around slapping companies for failing to comply with privacy regulation on that data.
“Would European-style privacy laws protect against this?” is the kind of question that sounds more clarifying than it actually is, because it collapses about five separate problems into one vague gesture at “Europe.”
The issue here isn’t simply “lack of privacy law.” It’s:
1. apps collecting precise location data in the first place,
2. adtech infrastructure broadcasting that data through RTB,
3. brokers aggregating and reselling it,
4. government agencies buying it to avoid the constraints that would apply if they tried to collect it directly, and
5. regulators failing to stop any of the above in a meaningful way.
European law is relevant to some of that, but not as a magic shield. GDPR and ePrivacy principles are obviously more restrictive on paper than the US free-for-all, especially around consent, purpose limitation, data minimization, and downstream reuse. But “on paper” is doing a lot of work there. Europe has had years of complaints about RTB specifically, and yet the adtech ecosystem did not exactly disappear. That should tell you something.
So the real answer is: yes, a stronger privacy regime can help, but no, this is not a problem that gets solved by vaguely importing “European-style privacy laws” as a concept. If the underlying business model still allows mass collection, opaque sharing, and resale of location data, then state access is a policy choice away. Governments don’t need to build a panopticon if the commercial sector already did it for them.
Also, the most important legal question here is not just whether private companies should be allowed to collect/sell this data. It’s whether the government should be allowed to buy commercially available data to do an end-run around constitutional and statutory limits. That is a distinct issue. You need rules for both the commercial market and state procurement, otherwise the state just shops where the Fourth Amendment doesn’t reach.
In other words, the contrast is not “Europe = protected, US = authoritarian.” The contrast is between systems that at least attempt to constrain collection and reuse, and systems that let surveillance markets mature first and ask questions later. Even in Europe, enforcement gaps, law-enforcement carveouts, and institutional incentives matter enormously.
So if the goal is to understand the story, the useful question isn’t “would Europe stop this?” It’s “what combination of collection limits, resale bans, procurement bans, audit requirements, and enforcement would actually make this impossible in practice?” Anything short of that is mostly aesthetics.
Very clearly put, and I'd only emphasise that without the final "enforcement" point, the other points become entirely irrelevant. While European regulators have imposed some significant-sounding fines on prominent entities, they generally work out to be less than the value gained by doing the thing in the first place - or at least close enough to that for the entity not to consider them much of a deterrent.
Unless you have a body that is a) serious about enforcement, b) has enough teeth to make a dent, and c) is not undermined by wider geopolitical posturing or economic neutering, you can have all the regulation you might want and still end up in the same place. I'm not arguing that we shouldn't try to control this, just that we have some extremely large genies to stuff back into bottles along the way.
I think it could still be solved pseudonymously: introduce a "vouch" button that allows a user to vouch that another user is human. This is consequential both for the vouched-for and the vouching accounts. Run a PageRank-style algorithm on the graph of vouches to generate a certainty score for the humanity of each account. For repeated posters this should converge to a correct answer fairly quickly. There is still a challenge for green accounts, but having a degraded experience for new users is not a doom scenario for the site.
For purely historical reasons the C/C++ stack is "small" with exactly how small being outside of programmer control. So you have to avoid using the stack even if it would be the better solution. Otherwise you risk your program crashing/failing with stack overflow errors.
On Linux the stack size is a process limit, set with ulimit (the default is typically 8 MB). You can even set it to unlimited if you want, meaning that essentially (but not quite) the stack and heap grow towards each other, limited only by the size of the address space.
ulimit only affects the main thread's stack, though. If you are using multi-threading, each additional thread has its own stack size, which you can configure with pthreads, but not with std::thread (even as of C++23 there is no standard API for it).
A related thing that bugs me is how many scam search results come up, and are prioritised, if you search for "uk eta" or similar. On Google for me the real site is sandwiched in position 4, after 3 paid sponsored scam sites and before 2 more, each with large block sections in the search results.
Scammers pay the same good money for advertising space as legitimate companies. Google profits either way, so there's no incentive to kick bad actors off its advertising platform.
In New York the biggest driver behind technology is the state testing regime. Make the case to your administration that the Chromebooks are insufficient for the state testing program and they will come up with the funds for upgrades.