This tool tests AI's resilience to 'poisoned' data

Source: https://www.zdnet.com/article/this-tool-tests-ais-resilience-to-poisoned-data/#ftag=RSSbaffb68

Image: alengo/Getty Images

The National Institute of Standards and Technology (NIST) is re-releasing a tool that tests how susceptible artificial intelligence (AI) models are to being “poisoned” by malicious data. 

The move comes nine months after President Biden’s Executive Order on the safe, secure, and trustworthy development of AI, and is a direct response to that order’s requirement that NIST help with model testing. NIST also recently launched a program that helps Americans use AI without falling prey to synthetic, or AI-generated, content and that promotes AI development for the benefit of society.

The tool, called Dioptra, was initially released two years ago and aims to help small- to medium-sized businesses and government agencies. Using it, an organization can determine which kinds of attacks would make its AI model perform less effectively, and quantify the resulting drop in performance to see under what conditions the model fails.
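
NIST documents Dioptra's own interface separately; purely to illustrate the idea, the sketch below (hypothetical code, not Dioptra's API) poisons a synthetic training set with a label-flipping attack at increasing rates and measures the resulting drop in test accuracy. All of the names in it, including the poison_labels helper, are assumptions made for illustration.

    # Hypothetical illustration only, not Dioptra's actual interface:
    # quantify how a label-flipping poisoning attack degrades test accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a real training set.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def poison_labels(labels, rate):
        # Attack: flip a random fraction `rate` of the binary training labels.
        poisoned = labels.copy()
        idx = rng.choice(len(poisoned), size=int(rate * len(poisoned)), replace=False)
        poisoned[idx] = 1 - poisoned[idx]
        return poisoned

    for rate in (0.0, 0.1, 0.2, 0.4):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_train, poison_labels(y_train, rate))
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")

Printing the accuracy at each poison rate gives exactly the kind of quantified performance drop the tool is meant to surface.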

Why does this matter?

It's critical that organizations take steps to ensure their AI programs are safe. NIST is actively encouraging federal agencies to use AI in various systems. AI models train on existing data, and if someone purposefully injects malicious data (say, data that makes a model ignore stop signs or speed limits), the results, NIST points out, could be disastrous.

Despite all the transformative benefits of AI, NIST Director Laurie E. Locascio says the technology brings along risks that are far greater than those associated with other types of software. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation,” she notes in the release. 

Dioptra can test multiple combinations of attacks, defenses, and model architectures to better understand which attacks may pose the greatest threats, NIST says, and what solutions might be best. 
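
As a hypothetical sketch of that grid idea (again, not Dioptra's actual interface), one can sweep every attack/defense pair and record a metric for each. Here the "attacks" are label-flip rates and the "defense" is a toy outlier filter; every name is an assumption for illustration.

    # Hypothetical sketch, not Dioptra's interface: evaluate every
    # combination of attack strength and defense and record test accuracy.
    from itertools import product
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def flip_labels(labels, rate):
        # Attack: flip a random fraction `rate` of the binary training labels.
        poisoned = labels.copy()
        idx = rng.choice(len(poisoned), size=int(rate * len(poisoned)), replace=False)
        poisoned[idx] = 1 - poisoned[idx]
        return poisoned

    def no_defense(X, y):
        return X, y

    def outlier_filter(X, y):
        # Toy defense: drop the 10% of points farthest from their class
        # centroid, on the guess that poisoned examples look like outliers.
        keep = np.ones(len(y), dtype=bool)
        for c in np.unique(y):
            mask = y == c
            dist = np.linalg.norm(X[mask] - X[mask].mean(axis=0), axis=1)
            keep[np.where(mask)[0][dist > np.percentile(dist, 90)]] = False
        return X[keep], y[keep]

    defenses = {"none": no_defense, "outlier-filter": outlier_filter}

    for rate, (name, defend) in product((0.0, 0.2, 0.4), defenses.items()):
        X_def, y_def = defend(X_train, flip_labels(y_train, rate))
        model = LogisticRegression(max_iter=1000).fit(X_def, y_def)
        print(f"attack={rate:.0%} defense={name}: accuracy={model.score(X_test, y_test):.3f}")

A real evaluation would also vary the model architecture, but the structure is the same: one metric recorded for every combination, making it clear which attacks hurt most and which defenses actually help.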

The tool doesn’t promise to take away all risks, but it does claim to help mitigate risk while still supporting innovation. It’s available to download for free. 

