{"id":47170,"date":"2025-09-09T03:37:12","date_gmt":"2025-09-09T08:37:12","guid":{"rendered":"https:\/\/ustower.net\/?p=47170"},"modified":"2025-09-09T03:37:16","modified_gmt":"2025-09-09T08:37:16","slug":"anthropic-backs-california-bill-that-would-mandate-ai-transparency-measures","status":"publish","type":"post","link":"https:\/\/ustower.net\/?p=47170","title":{"rendered":"Anthropic backs California bill that would mandate AI transparency measures"},"content":{"rendered":"\n<p class=\"has-medium-font-size\">Artificial intelligence developer Anthropic became the first major tech company Monday to endorse a California bill that would regulate the most advanced artificial intelligence models.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Proposed by state Sen. Scott Wiener, SB 53, if passed, would create the first broad legal requirements for large developers of AI models in the United States.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Among other conditions, the bill would require large AI companies offering services in California to create, publicly share and adhere to safety-focused guidelines and procedures stipulating how each company attempts to mitigate risks from AI. The bill would also strengthen whistleblower protections by creating stronger pathways for employees to flag concerns about severe or potentially catastrophic risks that might otherwise go unreported.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">\u201cWith SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety,\u201d Anthropic said in a statement.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The bill would largely codify existing voluntary commitments made by the world\u2019s largest AI companies, emphasizing transparency and attention to risks from advanced AI systems. 
For example, Anthropic, OpenAI, Google, Meta and other companies have already committed to assessing how their products could be used for nefarious purposes and to laying out mitigations to prevent these threats. Recent research has shown that AI models can help users execute cyberattacks and lower barriers to acquiring biological weapons.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">SB 53 would make many of those commitments mandatory, requiring companies to post their approaches to AI risk on their websites and to share summaries of \u201ccatastrophic risk\u201d assessments directly with a state-level office.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The new California bill would apply only to AI companies building cutting-edge models that demand massive computing power. Within that subset of AI companies, the strictest requirements in the bill would apply only to those with annual revenues exceeding $500 million.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">SB 53 would also establish an emergency reporting system through which an AI developer or members of the public could report critical safety incidents related to a model.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">\u201cAnthropic is a leader on AI safety, and we\u2019re really grateful for the company\u2019s support,\u201d Wiener told NBC News.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The bill appears likely to pass, having received overwhelming support in both the Assembly and the Senate in recent voting rounds. The Legislature must cast its final vote on the bill by Friday night.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">\u201cFrontier AI companies have made many voluntary commitments for safety, often without following through. This legislation takes a small but important first step toward making AI safer by making many of these voluntary commitments mandatory,\u201d Dan Hendrycks, executive director of the Center for AI Safety, told NBC News. 
\u201cWhile we need much more rigorous regulation to manage AI risks, SB 53 \u2014 and Anthropic\u2019s public support for it \u2014 are an encouraging development.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\">However, industry trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress are highly critical of the bill. The CTA said last week on X, \u201cCalifornia SB 53 and similar bills will weaken California and U.S. leadership in AI by driving investment and jobs to states or countries with less burdensome and conflicting frameworks.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\">SB 53 is an updated, somewhat-narrower version of a similar bill Wiener proposed last year. That bill, called SB 1047, attracted widespread scrutiny from AI developers, including OpenAI and initially Anthropic, in addition to industry trade groups like the Chamber of Progress and prominent Silicon Valley investing firms like Andreessen Horowitz. Critics attacked SB 1047\u2019s scope and language about potential penalties in case AI models caused \u201ccritical harm.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Unlike SB 53, SB 1047 would have required developers to undergo annual third-party audits of their adherence to the law and barred developers from releasing models that carried an \u201cunreasonable risk\u201d of individuals using the model to cause critical harms.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">SB 1047 was passed by the Legislature but vetoed by Gov. Gavin Newsom, who said it would throttle AI development and \u201cslow the pace of innovation.\u201d Several commentators and bill proponents argued that critics had misrepresented the bill\u2019s contents and that industry lobbying played a key role in the bill\u2019s veto.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">After the veto, Newsom formed a working group charged with providing recommendations for a revised version of SB 1047. 
Led by a group of AI experts, the working group provided its recommendations in the California Report on Frontier AI Policy in June.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Originally introduced in January, SB 53 incorporates many of the working group\u2019s recommendations, emphasizing transparency and the verification of commitments from leading AI labs.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">\u201cWe modeled the bill on that report,\u201d Sen. Wiener said. \u201cWhereas SB 1047 was more of a liability-focused bill, SB 53 is more focused on transparency.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Helen Toner, interim director of the Center for Security and Emerging Technology at Georgetown University, highlighted the growing consensus on the need for more insight into frontier AI companies\u2019 practices. \u201cSB 53 is primarily a transparency bill, and that\u2019s no coincidence,\u201d Toner said. \u201cThe need for more transparency from frontier AI developers is one of the AI policy ideas with the most consensus behind it.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Anthropic agreed. 
\u201cWe\u2019ve long advocated for thoughtful AI regulation and our support for this bill comes after careful consideration of the lessons learned from California\u2019s previous attempt at AI regulation,\u201d it said in its statement.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Any AI regulation passed in California would most likely have a significant impact on AI development nationally and around the world, as California is home to dozens of the world\u2019s leading AI companies.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">\u201cCalifornia is really at the beating heart of AI innovation, and we should also be at the heart of a creative AI safety approach,\u201d Wiener said.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The role of state legislation is a key issue in AI policy debates, as industry actors, including Anthropic competitor OpenAI, argue that a comprehensive, uniform approach to AI at the federal level is required \u2014 not a collage of state laws.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">The recently enacted Big Beautiful Bill federal spending package nearly included an amendment to prohibit states from passing AI-related legislation for 10 years, but the amendment was scratched in a late-night reversal.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">OpenAI\u2019s director of global affairs, Chris Lehane, responded to Anthropic\u2019s announcement by reaffirming OpenAI\u2019s preference for federal regulation. \u201cAmerica leads best with clear, nationwide rules, not a patchwork of state or local regulations,\u201d he wrote early Monday afternoon on LinkedIn.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Anthropic acknowledged the tension in its statement Monday but said SB 53 is a step in the right direction given federal inaction. 
\u201cWhile we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won\u2019t wait for consensus in Washington,\u201d it wrote.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">Wiener said: \u201cIdeally we would have comprehensive, strong pro-safety, pro-innovation federal law in this space. But that has not happened, so California has a responsibility to act. I would prefer federal regulation, too, but I\u2019m not holding my breath for that.\u201d<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/anthropic-backs-californias-sb-53-ai-bill-rcna229908\">NBC News<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence developer Anthropic became the first major tech company Monday to endorse a California bill that would regulate the most advanced artificial intelligence models. Proposed by state Sen. Scott Wiener, SB 53, if passed, would create the first broad legal requirements for large developers of AI models in the United States. 
Among other conditions, [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":47171,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5783],"tags":[5224,22766,34590,24828,2140],"class_list":["post-47170","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-sci-tech","tag-ai","tag-anthropic","tag-california-bill","tag-developers","tag-support"],"_links":{"self":[{"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/posts\/47170","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/ustower.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=47170"}],"version-history":[{"count":1,"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/posts\/47170\/revisions"}],"predecessor-version":[{"id":47172,"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/posts\/47170\/revisions\/47172"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ustower.net\/index.php?rest_route=\/wp\/v2\/media\/47171"}],"wp:attachment":[{"href":"https:\/\/ustower.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=47170"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ustower.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=47170"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ustower.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=47170"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}