Probably some fraction of the civilians blown up by Israeli terrorist phone strikes and bombing raids; there's a reason Hezbollah maintains some level of support in the region.
35-ish years ago there was a pitch for cheap, high-velocity, spin-stabilized rockets deployed in dense pods on the A-10. The rocket's seeker could divert a small amount of thrust at an angle for guidance, but otherwise that was it. I can't recall whether it ever made it out of the pilot phase, but obviously nothing new under the sun.
There are other mitigations, though: you can pass the expected bucket owner's account ID on S3 operations, and you can create SCPs that restrict roles from writing to buckets outside the account. Unless you have an account that does many cross-account S3 writes, the latter is a simple tool to prevent exfiltration. Well, simple assuming you're already set up with an Organization and can manage SCPs.
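A sketch of what that SCP could look like, using the aws:ResourceAccount condition key to deny S3 writes whenever the target bucket is owned by a different account than the caller's (the Sid and action list here are illustrative, not a complete write-action inventory):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCrossAccountS3Writes",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:ResourceAccount": "${aws:PrincipalAccount}"
        }
      }
    }
  ]
}
```

If you do have a handful of legitimate cross-account destinations, you'd typically carve them out with an explicit allow-list of account IDs in the condition instead of the policy variable.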
Having worked in this space for years, it's not nearly as bad as you think. IaC tools can all look up the accountId/region for the current execution context and you can use SSM Parameters to give you a helpful alias in your code.
Also, if you have a bunch of accounts, it's far easier for troubleshooting when the accountId is in the name: "I can't access bucket 'foo'" vs. "I can't access bucket 'foo-12345678901'".
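As a concrete sketch of the lookup, in Terraform it's one data source (the bucket and its name here are hypothetical):

```hcl
# Look up the account ID of the current execution context
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "foo" {
  # The accountId lands in the bucket name automatically,
  # so the same code works unchanged in every account.
  bucket = "foo-${data.aws_caller_identity.current.account_id}"
}
```

CDK and CloudFormation have equivalents (Stack.account, the AWS::AccountId pseudo parameter), so nothing has to be hardcoded.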
There's an explicit tension: SWEs would love that as a "get out of jail free" card, but their management chain is being evaluated by ajassy on AI/ML adoption. Admitting AI code as the root cause of a CoE is gonna look really bad unless/until your peers are also copping to it.
I think it's question 2 or 3 in a why chain, but 4 and 5 need to be about why the agent screwed up, and there need to be action items around giving the AI better guardrails, context, or tooling.
"Get a person to look at it" is a cop-out action item, best intentions only. It's nothing you could actually apply to make development better across the whole company.