
NIST Puts AI Risk Management on the Map with New Framework

February 2, 2023


(Image courtesy NIST)

The National Institute of Standards and Technology (NIST) today published the AI Risk Management Framework, a document intended to help organizations voluntarily develop and deploy AI systems while avoiding bias and other harmful outcomes. The document has a very good shot at defining the standard legal approach that organizations will use to mitigate the risks of AI in the future, says Andrew Burt, founder of AI law firm BNH.ai.

As the pace of AI development accelerates, so too do the potential harms from using AI. NIST, at the request of the United States Congress, developed the AI Risk Management Framework (RMF) to provide a repeatable approach to building responsible AI systems.

“Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities,” states the RMF executive summary. “With proper controls, AI systems can mitigate and manage inequitable outcomes.”

The 48-page document seeks to help organizations approach AI risk management through four functions, dubbed the RMF Core functions: Map, Measure, Manage, and Govern.

First, it encourages users to map out the AI system in its entirety, including its intended business purpose and the potential harms that can result from using it. Imagining the different ways an AI system can produce positive and negative outcomes is essential to the whole process. Business context is critical here, as is the organization’s tolerance for risk.

Map, measure, manage, and govern (NIST AI RMF)
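
The RMF itself doesn’t prescribe any particular format for these maps. Purely as an illustration, the Python sketch below shows what a machine-readable risk map entry could look like; every field name and value is an assumption made for this example, not something the framework specifies.

```python
from dataclasses import dataclass, field

@dataclass
class RiskMapEntry:
    """One entry in an AI risk map (illustrative only, not an RMF schema)."""
    system: str             # the AI system being mapped
    business_purpose: str   # intended use, in business terms
    potential_harm: str     # a harm the system could plausibly cause
    affected_groups: list = field(default_factory=list)
    risk_tolerance: str = "low"   # the organization's appetite for this risk

# Hypothetical example: one mapped risk for a resume-screening model
entry = RiskMapEntry(
    system="resume-screener-v2",
    business_purpose="rank job applicants for recruiter review",
    potential_harm="systematically down-ranks candidates from some groups",
    affected_groups=["job applicants"],
)
print(entry)
```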

Second, the RMF asks the ethical AI practitioner to use the maps created in the first step to determine how to measure the impacts the AI system is having, in both a quantitative and a qualitative manner. The measurements should be conducted regularly and should cover the system’s functionality, examinability, and trustworthiness (including avoidance of bias), with the results compared to benchmarks, the RMF states.
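
The framework leaves the choice of metrics to the practitioner. As one illustration of the quantitative side, the sketch below computes a simple demographic parity gap, a common fairness metric, over a hypothetical audit sample and checks it against a benchmark threshold; the data, the threshold, and the metric choice are all assumptions for this example, not RMF requirements.

```python
def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest favorable-outcome
    rates across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: 1 = favorable decision, 0 = unfavorable
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_gap(outcomes, groups)
BENCHMARK = 0.2  # illustrative tolerance, set per the organization's risk appetite
print(f"parity gap = {gap:.2f}, within benchmark: {gap <= BENCHMARK}")
```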

Third, organizations use the measurements from step two to manage the AI system on an ongoing basis. The framework gives users the tools to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks, the RMF says.
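
The RMF doesn’t dictate how to prioritize risks. One common approach, sketched below purely for illustration, is to score each mapped risk by likelihood times impact and direct mitigation resources to the highest-scoring risks first; all the risks and numbers here are invented.

```python
# Illustrative risk register: (risk description, likelihood 1-5, impact 1-5)
risks = [
    ("biased ranking of applicants", 4, 5),
    ("model drift after retraining", 3, 3),
    ("unexplainable individual decisions", 2, 4),
]

# Rank risks by a simple likelihood-times-impact score, highest first,
# so mitigation resources go to the most pressing items
prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in prioritized:
    print(f"score {likelihood * impact:>2}: {name}")
```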

The Map, Measure, and Manage functions come together under the overarching Govern function, which gives users the policies and procedures needed to implement all the necessary components of a risk mitigation strategy.

The RMF doesn’t have the force of law, and likely never will. But it does lay out a workable approach to managing risk in AI, says Burt, who co-founded BNH.ai in 2019 after working as Immuta’s chief legal counsel.

“Part of the advantage of the NIST framework is that it’s voluntary, not regulatory,” Burt tells Datanami in an interview today. “That being said, I think it’s going to set the standard of care.”

The current state of American law when it comes to AI is “the Wild West,” Burt says. There are no clear legal standards, which is a concern both to the companies looking to adopt AI as well as citizens hoping not to be harmed by it.

The NIST RMF has the potential to become “a concrete, specific standard” that everybody in the U.S. can agree on, Burt says.

“From a legal perspective, if people have practices in place that are wildly divergent from the NIST RMF, I think it will be easy for a plaintiff to say ‘Hey, what you’re doing is negligent or irresponsible’ or ‘Why didn’t you do this?’” he says. “This is a clear standard, a clear best practice.”

(Lightspring/Shutterstock)

BNH.ai conducts AI audits for a number of clients, and Burt foresees the RMF approach becoming the standard way to conduct AI audits in the future. Companies are quickly awakening to the fact that they need to audit their AI systems to ensure that they’re not harming users or perpetuating bias in a damaging way. In many ways, the AI cart is getting way out in front of the horse.

“The market is adopting these technologies a lot faster than they can mitigate their harm,” Burt says. “That’s where we come in as a law firm. That’s where regulations are starting to come in. That’s where the NIST framework comes in. There are all sorts of sticks and carrots that are going to, I think, help to correct this imbalance. But right now, I’d say there’s a pretty severe imbalance between the value that people are getting out of these tools and the actual risk that they pose.”

Much of the risk stems from the rapid adoption of tools like ChatGPT and other large language and generative AI models. Since these systems are trained on a corpus of data spanning almost the entire Internet, the amount of bias and hate speech contained in the training data is potentially staggering.

“In the last three months, the big change in the potential for AI to inflict harm relates to how many people are using these systems,” Burt says. “I don’t know the numbers for ChatGPT and others, but they’re skyrocketing. These systems are starting to be deployed outside of laboratory environments in ways that are really significant. And that’s where the law comes in. That’s where risk comes in, and that’s where real harms start to be generated.”

The RMF in some ways will become the American counterpart to the European Union’s AI Act. First proposed in 2021, the EU’s AI Act is likely to become law this year, and, with its graduated levels of acceptable risk, will have a dramatic impact on companies’ ability to deploy AI systems.

(Drozd Irina/Shutterstock)

There are big differences between the two approaches, however. For starters, the AI Act will have the force of law, and will impose fines for transgressions. The RMF, on the other hand, is completely voluntary, and will impose change by becoming the industry standard that attorneys can cite in civil court.

The RMF is also general and flexible enough to adapt to the fast-changing AI landscape, which also puts it at odds with the AI Act, Burt says.

“I would say [the EU’s] approach tends to be pretty systematic and pretty inflexible, similar to the GDPR,” Burt says. “They’re trying to really tackle everything all at once. It’s a valiant effort, but the NIST RMF is a lot more flexible. Smaller organizations with minimal resources can apply it. Large organizations with a huge amount of resources can apply it. I would say it’s a lot more of a risk-based, context-specific, flexible approach.”

You can access more information about the RMF at www.nist.gov/itl/ai-risk-management-framework.

Related Items:

Europe’s New AI Act Puts Ethics In the Spotlight

Organizations Struggle with AI Bias

New Law Firm Tackles AI Liability

 


