California AI bill veto: An opportunity for smaller developers and models to flourish




California Gov. Gavin Newsom vetoed SB 1047, the bill that many believed would change the landscape of AI development in the state and across the country. The veto, issued on Sunday, could give AI companies the chance to show they can proactively protect users from AI risks.

SB 1047 would have required AI companies to include a “kill switch” in their models, implement a written safety protocol and engage a third-party safety auditor before starting to train models. It would have also given California’s attorney general access to auditors’ reports and the right to sue AI developers.

Some AI industry veterans believed the bill could have a chilling effect on AI development. Many in the industry thanked Newsom for vetoing the bill, noting the veto could protect open-source development in the future. Yann LeCun, chief AI scientist at Meta and a vocal opponent of SB 1047, posted on X (formerly Twitter) that Newsom’s decision was “sensible.”

Prominent AI investor and Andreessen Horowitz co-founder and general partner Marc Andreessen said Newsom had sided “with California Dynamism, economic growth, and freedom to compute.”

Other industry players also weighed in, saying that while they believe regulation in the AI space is necessary, it should not make it harder for smaller developers and smaller AI models to flourish.

“The core issue isn’t the AI models themselves; it’s the applications of those models,” said Mike Capone, CEO of data integration platform Qlik, in a statement sent to VentureBeat. “As Newsom pointed out, smaller models are sometimes deployed in critical decision-making roles, while larger models handle more low-risk tasks. That’s why we need to focus on the contexts and use cases of AI, rather than the technology itself.”

He added that regulatory frameworks should focus on “ensuring safe and ethical usage” and on supporting AI best practices.

Coursera co-founder Andrew Ng also said the veto was “pro-innovation” and would protect open-source development. 

It is not just corporations hailing the veto. Dean Ball, AI and tech policy expert at George Mason University’s Mercatus Center, said the veto “is the right move for California, and for America more broadly.” Ball noted that the bill targeted model size thresholds that are becoming out of date and would not encompass recent models such as OpenAI’s o1.

Lav Varshney, associate professor of electrical and computer engineering at the University of Illinois’ Grainger College of Engineering, noted that the bill penalized original developers for the actions of those who use the technology.

“Since SB 1047 had provisions on the downstream uses and modifications of AI models, once it left the hands of the original developers, it would have made it difficult to continue innovating in an open-source manner,” Varshney told VentureBeat. “Shared responsibility among the original developers and those that fine-tune the AI to do things beyond the knowledge (and perhaps imagination) of the original developers seems more appropriate.”

Improving existing guardrails

The veto, though, could allow AI model developers to strengthen their AI safety policies and guardrails.

Kjell Carlsson, head of AI strategy at Domino Data Lab, said this presents an opportunity for AI companies to examine their governance practices closely and embed these in their workflows. 

“Enterprise leaders should seize this opportunity to proactively address AI risks and protect their AI initiatives now. Rather than wait for regulation to dictate safety measures, organizations should enact robust AI governance practices across the entire AI lifecycle: establishing controls over access to data, infrastructure and models, rigorous model testing and validation, and ensuring output auditability and reproducibility,” said Carlsson. 

Navrina Singh, founder of AI governance platform Credo AI, said in an interview with VentureBeat that while SB 1047 had good points around auditing rules and risk profiling, it showed there is still a need to understand what needs to be regulated around AI.

“We want governance to be at the center of innovation within AI, but we also believe that those who want to succeed with AI want to lead with trust and transparency because this is what customers are demanding of them,” Singh said. She added while it’s unclear if SB 1047’s veto would change the behaviors of developers, the market is already pushing companies to present themselves as trustworthy.

Disappointment from others 

However, not everyone is hailing Newsom’s decision, with tech policy and safety groups condemning the veto.

Nicole Gill, co-founder and executive director of the non-profit Accountable Tech, said in a statement that Newsom’s decision “is a massive giveaway to Big Tech companies and an affront to all Americans who are currently the uncontested guinea pigs” of the AI industry. 

“This veto will not ‘empower innovation’ – it only further entrenches the status quo where Big Tech monopolies are allowed to rake in profits without regard for our safety, even as their AI tools are already threatening democracy, civil rights, and the environment with unknown potential for other catastrophic harms,” Gill said. 

The AI Policy Institute echoed this sentiment, with executive director Daniel Colson saying the decision to veto “is misguided, reckless, and out of step with the people he’s tasked with governing.”

The groups said that in California, where the majority of the country’s AI companies are located, AI development will now go unchecked despite the public’s demand to rein in some of its capabilities.

The United States does not have any federal regulation around generative AI. While some states have developed policies on AI usage, no law imposes rules around the technology. The closest federal government policy is an executive order from President Joe Biden. The executive order laid out a plan for agencies to use AI systems and asked AI companies to voluntarily submit models for evaluation before public release. OpenAI and Anthropic agreed to let the government test their models.

The Biden administration has also said it plans to monitor open-weight models for potential risks.
