Most likely, those were debris from the interception of missiles flying overhead and being destroyed on their way to a military target.
AFAIK, there have been no confirmed signs of civilian sites being targeted directly, and it would also be unlikely that actual missiles would cause so little damage that you could patch your datacenter up and get it ready to go within hours.
Thanks for the links, which I've reviewed. Allow me to clarify: I meant sources confirming that the civilian places hit (e.g. hotels and residential buildings) were the actual targets.
Local and official news all say that these were hit by debris from intercepted missiles/drones (on their way to somewhere else). There is a major difference between this, vs. if those buildings were directly being targeted.
AFAICT your linked sources indicate that the oil installations and ports were targets, but not the hotels and buildings.
I'm asking in good faith as this makes a significant difference.
I don't see a large difference between a civilian port, a civilian oil facility, or a civilian aluminum factory vs. a hotel, as far as the question of whether the Iranians are capable of targeting a civilian data center goes. However, assuming you are curious, here goes:
Finding these takes time, so I'm sorry, but this is going to be the last of these sources I'll paste. For example, a Bahrain luxury apartment building being hit:
If you take investments, your investors will most likely own shares of the company (except in specific early-stage scenarios like YC's SAFE). Sometimes major investors will have board seats or voting shares. This happens in normal private companies, not just public ones.
> I joined Anthropic with the impression that the responsible scaling policy was a binding pre-commitment for exactly this scenario
Pledges are generally non-binding (you can pledge to do no evil and still do it), but they fulfill an important function as a signal: actively removing your public pledge to do "no evil", when you could have acted as you wished anyway, changes the market you're signaling to. That's the most worrying part, IMO.
Would you be kind enough to explain your gut reaction here with logical arguments as to why this is definitely not a feature that would ever be released?
> Recursive self-improvement doesn't get around this problem. Where does it get the data for next iteration? From interactions with humans.
It wasn't true for AlphaGo, and I see no reason it should be true for a system based on math. It makes sense that a talented mathematician who is literally made of math could build a slightly better mathematician, and so on.
AlphaGo was able to recursively self-improve within the domain of the game of go, which has an astonishingly small set of rules.
We're asking AIs to have data that covers the real physical world, plus pretty much all of human society and culture. Doing self-improvement on that without external input is a fundamentally different proposition than doing it for go.
> the real physical world, plus pretty much all of human society and culture
is only a tiny part of the problem (more data plus understanding more rules); the main problem is "getting smarter".
You can get smarter without learning more about the world or human society and culture. I mean, that's allegedly how Blaise Pascal worked out a lot of mathematics in his teenage years.
My point is that the "getting smarter" part (not book-smart, which is your physical-world data, nor street-smart, which is your human-culture data, but better-at-processing-and-solving-problems smart) is made of math. And using math to make that part better is the kind of self-improvement that does not necessarily require human input.
I think you're strawmanning my math point from "if you're made of math and can make a trivial improvement in the math, you get a smarter n+1 program that can likely make another trivial improvement to n+2"... to "AI can solve all math" (which is not my point at all).
You seem to be generalizing item #3 from "there are limits to what AI can do with math", to "therefore, AI can't improve any math, and definitely not the very specific kind of math that is relevant to improving AI". That is a huge unjustified logical jump.
Has it ever happened, on the path from Enigma to Claude Opus 4.6, that the necessary next step was to figure out the next Busy Beaver number? Is Opus 4.6 better at Busy Beaver than Sonnet 3.5?
Or is that a mostly unrelated piece of math, largely irrelevant to making a "smarter" AI program from where we are today?
As I shared in a comment above, the book is released under a Creative Commons license that does not authorize sharing derived works, so only the original author can distribute an alternative version to you.
Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intended new law gets passed.