Hacker News | new | past | comments | ask | show | jobs | submit | login

Copied from the end of the page:

What the law does: SB 53 establishes new requirements for frontier AI developers, creating stronger:

Transparency: Requires large frontier developers to publicly publish a framework on their websites describing how the company has incorporated national standards, international standards, and industry-consensus best practices into its frontier AI framework.

Innovation: Establishes a new consortium within the Government Operations Agency to develop a framework for creating a public computing cluster. The consortium, called CalCompute, will advance the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation.

Safety: Creates a new mechanism for frontier AI companies and the public to report potential critical safety incidents to California’s Office of Emergency Services.

Accountability: Protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General’s office.

Responsiveness: Directs the California Department of Technology to annually recommend appropriate updates to the law based on multistakeholder input, technological developments, and international standards.



I don't see these. Did the URL get switched? Anyone have the original?


So the significant regulatory hurdle for companies that this SB introduces is... "You have to write a doc." Please tell me there's actual meat here.


> This product contains AI known in the state of California to not incorporate any national standards, international standards, or industry-consensus best practices into its framework

Compliance achieved.


So they're going to give Nvidia a bunch of money they don't have in order to build their own LLM-hosting data center?


It sounds like a nothing burger? Pretty much the only thing tech companies have to do in terms of transparency is create a static web page with some self-flattering fluff on it?

I was expecting something more like a mandatory BOM-style list of "ingredients", regular audits, public reporting on safety incidents, etc.


By putting "ethical" in there it essentially gives the California AG the right to fine companies that provide LLMs capable of expressing controversial viewpoints.


I only see "ethical" under the innovation / consortium part. Don't see how that applies to people producing LLMs outside of the consortium?


This is so watered down and full of legal loopholes for corps to exploit. I like the initiative, but I wouldn't count on safety, or on model providers being forced to do the right thing.

And when the AI bubble pops, does it also prevent corps from getting themselves bailed out with taxpayer money?


At least a bunch of lawyers and AI consultants (who, conveniently, are frequently also lobbyists and consultants for the legislature) now get some legally mandated work and will make a shit ton more money!


And what nobody seems to notice: that last part looks like it was generated by Anthropic's Claude (it likes to make bolded lists with check emojis, structured exactly in that manner). Kind of scary, since it implies they could be letting these models draft legislation.


It's possible that AI was only used for this summary section, which isn't as scary as you make it out to be. It's definitely scary that AI is used in a legislative doc at all.


Correct, but as you point out in the second half, I don't doubt that if they're using it for summaries, they're likely using it in their daily work too.


Legislators around the world have been doing that for a while now.


More waste and graft so they can extort money out of the private sector to their mafia. Got it.



