AI Regulation in Peril: Navigating Uncertain Times

Source: https://venturebeat.com/ai/ai-regulation-in-peril-navigating-uncertain-times/



The Supreme Court recently took a sledgehammer to federal agencies’ powers, as noted by Morning Brew.

Less than a year ago, the drive for AI regulation was gaining significant momentum, marked by key milestones such as the AI Safety Summit in the U.K., the Biden Administration’s AI Executive Order, and the EU AI Act. However, a recent judicial decision and potential political shifts are leading to more uncertainty about the future of AI regulation in the U.S. This article explores the implications of these developments on AI regulation and the potential challenges ahead.

The Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo weakens federal agencies’ authority to regulate various sectors, including AI. In overturning a precedent dating back forty years known as “Chevron deference,” the court decision shifts the power to interpret ambiguous laws passed by Congress from federal agencies to the judiciary.

Agency expertise vs. judicial oversight

Existing laws are often vague in many fields, including those related to the environment and technology, leaving interpretation and regulation to the agencies. This vagueness in legislation is often intentional, for both political and practical reasons. Now, however, any regulatory decision by a federal agency based on those laws can be more easily challenged in court, and federal judges have more power to decide what a law means. This shift could have significant consequences for AI regulation. Proponents argue that it ensures a more consistent interpretation of laws, free from potential agency overreach.

However, the danger of this ruling is that in a fast-moving field like AI, agencies often have more expertise than the courts. For example, the Federal Trade Commission (FTC) focuses on consumer protection and antitrust issues related to AI, the Equal Employment Opportunity Commission (EEOC) addresses AI use in hiring and employment decisions to prevent discrimination, and the Food and Drug Administration (FDA) regulates AI in medical devices and software as a medical device (SaMD).

These agencies purposely hire people with AI knowledge for these activities. The judicial branch has no such existing expertise. Nevertheless, the majority opinion said that “…agencies have no special competence in resolving statutory ambiguities. Courts do.” 

Challenges and legislative needs

The net effect of Loper Bright Enterprises v. Raimondo could be to undermine the ability to set up and enforce AI regulations. As stated by the New Lines Institute: “This change (to invalidate Chevron deference) means agencies must somehow develop arguments that involve complex technical details yet are sufficiently persuasive to an audience unfamiliar with the field to justify every regulation they impose.”

In her dissent, Justice Elena Kagan disagreed on which body could more effectively provide useful regulation: “In one fell swoop, the (court) majority today gives itself exclusive power over every open issue — no matter how expertise-driven or policy-laden — involving the meaning of regulatory law. As if it did not have enough on its plate, the majority turns itself into the country’s administrative czar.” Specific to AI, Kagan said during oral arguments of the case: “And what Congress wants, we presume, is for people who actually know about AI to decide those questions.”

Going forward, then, when passing a new law affecting the development or use of AI, if Congress wished for federal agencies to lead on regulation, it would need to state this explicitly within the legislation. Otherwise, that authority would reside with the federal courts. Ellen Goodman, a professor who specializes in law related to information policy at Rutgers University, said in FedScoop: “The solution was always getting clear legislation from Congress but ‘that’s even more true now.’”

Political landscape

However, there is no guarantee that Congress would include this stipulation, as doing so is subject to the makeup of the body. A conservative viewpoint expressed in the recently adopted platform of the Republican party clearly states an intention to overturn the existing AI Executive Order. Specifically, the platform says: “We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology.” Per AI industry commentator Lance Eliot in Forbes: “This would presumably involve striking out the stipulations on AI-related reporting requirements, AI evaluation approaches, (and) AI uses and disuses limitations.”

Based on reporting in another Forbes article, one of the people influencing the drive to repeal the AI Executive Order is tech entrepreneur Jacob Helberg, who “believes that existing laws already govern AI appropriately, and that ‘a morass of red tape’ would harm U.S. competition with China.” However, it is those same laws and ensuing interpretation and regulation by federal agencies that have now been undercut by the decision in Loper Bright Enterprises v. Raimondo.

In lieu of the current executive order, the platform adds: “In its place, Republicans support AI development rooted in free speech and human flourishing.” New reporting from the Washington Post cites an effort led by allies of former president Donald Trump to create a new framework that would, among other things, “make America first in AI.” That could include reduced regulations as the platform states an intention to “cut costly and burdensome regulations,” especially those in their view that “stifle jobs, freedom, innovation and make everything more expensive.”

Regulatory outlook

Regardless of which political party wins the White House and control of Congress, there will be a different AI regulatory environment in the U.S. 

Foremost, the Supreme Court’s decision in Loper Bright Enterprises v. Raimondo raises significant concerns about the ability of specialized federal agencies to enforce meaningful AI regulations. In a field as dynamic and technical as AI, the likely impact will be to slow or even thwart meaningful AI regulation. 

A change in leadership at the White House or Congress could also change AI regulatory efforts. Should conservatives win, it is likely there will be less regulation, and that what remains will be less restrictive on businesses developing and using AI technologies.

This approach would be in stark contrast to the U.K., where the recently elected Labour party promised in its manifesto to introduce “binding regulation on the handful of companies developing the most powerful AI models.” The U.S. would also have a far different AI regulatory environment than the EU with its recently passed AI Act.

The net effect of all these changes could be less global alignment on AI regulation, although it is unknown how this might impact AI development and international cooperation. This regulatory mismatch could complicate international research partnerships, data sharing agreements and the development of global AI standards. Less regulation of AI could indeed spur innovation in the U.S. but could also lead to increased concerns about AI ethics and safety, and the potential impact of AI on jobs. This unease could in turn have a negative impact on trust in AI technologies and the companies that build them. 

It is possible that in the face of weakened regulations, major AI companies would proactively collaborate on ethical uses and safety guidelines. Similarly, there could be a greater focus on developing AI systems that are more interpretable and easier to audit. This could help companies stay ahead of potential negative feedback and show responsible development. 

At a minimum, there will be a period of greater uncertainty about AI regulation. As the political landscape shifts and regulations change, it is crucial for policymakers, industry leaders and the tech community to collaborate effectively. Unified efforts are essential to ensure that AI development remains ethical, safe and beneficial for society.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

