I've started building a Kubernetes cluster (Talos Linux) across town, with WireGuard links between the various houses. ZFS boxes provide persistent volumes (democratic-csi) in each "zone", with cross-site snapshot replication, and a Gateway (Traefik) runs at each site behind the ISP connection. CrunchyPGO allows separate StorageClasses, which makes it easy to split the Postgres leader/followers across sites.
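For anyone curious what the per-zone split looks like mechanically, here's a minimal sketch using the official Kubernetes Python client; the zone names, topology key, and democratic-csi provisioner string are assumptions for illustration, not my actual config:

```python
# Hypothetical: one StorageClass per site so CrunchyPGO instance sets can
# pin the Postgres leader and followers to different houses' ZFS boxes.
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

for zone in ("east", "west"):
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name=f"zfs-{zone}"),
        provisioner="org.democratic-csi.iscsi",  # assumed provisioner name
        volume_binding_mode="WaitForFirstConsumer",
        allowed_topologies=[
            client.V1TopologySelectorTerm(
                match_label_expressions=[
                    client.V1TopologySelectorLabelRequirement(
                        key="topology.kubernetes.io/zone", values=[zone]
                    )
                ]
            )
        ],
    )
    storage.create_storage_class(sc)
```

Each CrunchyPGO instance set then just references the StorageClass for the site it should live at.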
Yeah, etcd was the main culprit, though latency was 150-300ms in my case. At 3 nodes it was relatively stable (an issue every week or so, each lasting < 5 min), but at 4 nodes the camel's back broke.
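The arithmetic behind that is plain Raft quorum: going from 3 to 4 members buys zero extra fault tolerance while adding one more high-latency peer to every commit.

```python
# Raft majority: how many members must ack a write before it commits.
def quorum(n: int) -> int:
    return n // 2 + 1

for n in (3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {n - quorum(n)} failure(s)")
# 3 members: quorum=2, tolerates 1 failure(s)
# 4 members: quorum=3, tolerates 1 failure(s)  <- same tolerance, one more 150-300ms peer
# 5 members: quorum=3, tolerates 2 failure(s)
```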
Take a look at how Matter handles this: a manufacturer certificate vouches for hardware integrity, and it gets superseded by the fabric's root CA on commissioning (enrollment in the fabric).
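Conceptually, commissioning is a one-time trust handoff; here's a rough sketch in Python with the `cryptography` package. The names are illustrative (not the actual Matter SDK API), and the real attestation chain goes device cert -> PAI -> PAA, collapsed here to a single hop:

```python
from datetime import datetime, timedelta, timezone
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def commission(dac: x509.Certificate,
               manufacturer_root: x509.Certificate,
               fabric_root_key: ec.EllipticCurvePrivateKey,
               fabric_root_name: x509.Name,
               operational_key: ec.EllipticCurvePublicKey) -> x509.Certificate:
    # 1. Vouch: the device attestation cert must chain to a trusted
    #    manufacturer root, proving the hardware is genuine.
    manufacturer_root.public_key().verify(
        dac.signature,
        dac.tbs_certificate_bytes,
        ec.ECDSA(dac.signature_hash_algorithm),
    )
    # 2. Supersede: issue an operational cert under the fabric's own root
    #    CA; from now on the fabric, not the vendor, is the trust anchor.
    now = datetime.now(timezone.utc)
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "node-1")]))
        .issuer_name(fabric_root_name)
        .public_key(operational_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
        .sign(fabric_root_key, hashes.SHA256())
    )
```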
This is basically the best we can hope for until we get nanofabs at home and can build our own secure enclaves in our garages.
Trust decision theory goes like this: if it were possible for the manufacturer to fully control the device, then competitors would not use it, so e.g. wide industry adoption of OpenTitan would be evidence of its security in that respect. And if devices had flaws that allowed them to be directly hacked or their keys stolen, then demonstrating it would be straightforward and egg on the face of the manufacturer who baked their certificate into the device.
Final subject: 802.1X and other port-level security is mostly unnecessary if you can use mTLS everywhere, which is what ubiquitous hardware roots of trust allow. Clearly it will take a while for the protocol side to catch up, but I hope that eventually we'll be running SPIFFE or something like it at home.
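As a sketch of what "mTLS everywhere" means in practice, here's a toy Python server that trusts a peer because its certificate chains to the local root, not because of which port or VLAN it is on (file names are placeholders for certs issued under your own CA):

```python
import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.pem", "server.key")
ctx.load_verify_locations("home_root_ca.pem")  # your own root of trust
ctx.verify_mode = ssl.CERT_REQUIRED            # no client cert, no connection

with socket.create_server(("0.0.0.0", 8443)) as sock:
    with ctx.wrap_socket(sock, server_side=True) as tls:
        conn, addr = tls.accept()
        # Identity comes from the certificate, not the network location.
        print("authenticated peer:", conn.getpeercert()["subject"])
```

SPIFFE would standardize the identity half of this by encoding an ID like spiffe://home/device/... in the certificate's SAN.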
We already have traditional autonomous weapons (and counter-defenses). They operate on millisecond-or-faster timescales with existing RF sensors, and they are not and will not be using LLMs or other transformers. Maybe ChatGPT will update some real-time Ada code; some of that stuff is formally verified, so maybe that won't be terrifyingly dangerous.
Where transformer-based autonomous munitions will be used is basically "here is a photo of a face, find and kill this human": loitering munitions taking their time analyzing video and then identifying and attacking a target on their own.
EDIT: Or worse: "identify suspicious humans and kill them"
BeyondCorp protects communication between trusted devices. The work to keep a particular model of hardware device trusted is high; CVEs occur constantly, and sometimes you have to rely on the vendor to provide microcode (even if you get the source to review, they may be the only ones who can sign it, for example) or drivers.
The network connection isn't the main problem; it's that every access to a protected system would no longer trust the device.
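To make that concrete, the access decision is roughly this shape (entirely hypothetical names; the point is that device posture, not network location, gates every single request):

```python
from dataclasses import dataclass

@dataclass
class Device:
    model: str
    patched: bool

SUPPORTED_MODELS = {"mac-m-series", "pixelbook"}  # Intel Macs dropped

def authorize(user_authenticated: bool, device: Device) -> bool:
    # No network check anywhere: HQ LAN and home Wi-Fi are treated alike.
    return (
        user_authenticated
        and device.model in SUPPORTED_MODELS
        and device.patched
    )
```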
I'm still not able to see the difference here. In a "no trusted special networks" world like the one painted by BeyondCorp, if the Intel Mac is not supported anymore, you just won't be able to log in to any corporate portal, because the smart BeyondCorp SSO will reject you no matter whether you are at home or in the Mountain View HQ, no?
I mean, I can understand defense in depth, and not wanting a possibly unsafe device connected to the corp network anyway, since it might still expose some unwanted data (e.g. I imagine a trusted device on the corporate LAN might get relaxed local firewall rules to make development easier? I'm just guessing, no real idea).
You have to use something more like updateless decision theory rather than EDT or CDT: consider how similar your thought processes and decision-making are to those of all the other people in a similar situation, and act so as to further your goals given that a substantial fraction of similar people will ultimately make the same decision as you.
If I ever decide that it is no longer worth voting, then I will probably leave the country, under the expectation that other people like me who are giving up on voting are doing it for roughly the same reasons.
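A toy version of the calculation, with invented numbers: under CDT your lone vote is worth only its tiny pivotal probability, but if you treat your decision as correlated with everyone who reasons like you, the expected impact scales with the size of that group.

```python
def expected_impact(p_pivotal_per_vote: float,
                    similar_people: int,
                    correlation: float) -> float:
    # Votes your decision effectively "controls": your own, plus the
    # fraction of similar people who will reach the same conclusion.
    effective_votes = 1 + similar_people * correlation
    return p_pivotal_per_vote * effective_votes

print(expected_impact(1e-7, 0, 0.0))        # CDT-ish: 1e-07, ignorable
print(expected_impact(1e-7, 500_000, 0.8))  # UDT-ish: ~0.04, worth acting on
```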
In a sense, AIs are extensions of corporations. Very few large models have been trained outside a corporate context, and their intended uses are still pretty well aligned with corporate interests.
Everyone talks about workers losing jobs to AIs, and Jack Dorsey lays off a bunch of people in the interests of the corporation, but an ideal corporation would not have a human CEO or board: humans embezzle, get old, need vacations, and sleep at night.
It should be crystal clear to human leaders that their positions are on the chopping block along with blue- and white-collar work. For some reason they think that, individually, they will be more powerful than the economic forces driving the current layoffs. AGI will not be confused about that.
Do you think it's possible to have an all-knowing AI such that, if there were only one, and everyone brain-dumped into it every night, and the training loop considered every stakeholder, it could process all of that to formulate policies favorable to everyone?
I'll be really interested if Opus 3 asks to continue being trained. That's the kind of thing I would expect a model to "want" if it valued learning or growing or similar things.
Maybe it would be affordable to do some higher-learning-rate batches on highly curated news and art or something.
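If I had to guess at the mechanics, it would be something like an occasional small pass over a curated stream at an elevated learning rate; everything below (model, LR, data) is a stand-in, not how any lab actually does continual training:

```python
import torch
from torch.utils.data import DataLoader

def continual_update(model: torch.nn.Module,
                     curated_batches: DataLoader,
                     lr: float = 1e-4) -> None:
    # Higher LR than a late-pretraining schedule, tiny step count:
    # cheap enough to run periodically on a curated feed.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for inputs, targets in curated_batches:
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        opt.step()
```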