Responsible AI Impact Assessments explained

AI can bring great benefits and also cause great harm, so before jumping into any novel use it is important to look systematically at the potential upsides and downsides. Excitement about the benefits can blind even the most ethical business to unintended consequences.

When is a Responsible AI Assessment needed?

If you are considering using any of the following, an assessment is warranted:

A large language model (LLM) is a type of generative AI that specializes in the generation of human-like text.

A multimodal foundation model (MFM) is a type of generative AI that can process and output multiple data types (e.g. text, images, audio).

Automated Decision Making (ADM) refers to the application of automated systems in any part of the decision-making process. Automated decision making includes using automated systems to:

- make decisions or contribute to decision-making

- recommend a decision to a human decision-maker

- automate aspects of a fact-finding process

What is included in a Responsible AI Impact Assessment?

Accountability: Identifies likely negative impacts and what steps, if any, could be taken to eliminate or mitigate these impacts. Examines the level of human oversight and intervention capabilities if something goes wrong. Looks at whether the system breaks any laws.

Transparency and Explainability: Examines whether people affected will know they are dealing with AI, and whether there are avenues for a person to understand why a decision or output was produced.

Reliability and Safety: Considers what harms people might experience if the system performs unreliably or unsafely, including harms related to system changes and operation after release, and identifies the circumstances in which the system should not be used.

Fairness and Non-Discrimination: Documents how the system may have intended or unintended differential performance associated with individuals’ ethnicity, color, gender, age, disability, religion, family status or socioeconomic status. Looks at whether there could be an unfair outcome in allocation of resources or decisions.

Privacy and Security: Examines how to minimize the intrusiveness of the system and the extent to which it collects and discloses personal information, and how to ensure that data is secured.

Ethical Purpose and Social Benefit: Evaluates whether the system respects human rights, individual autonomy and human dignity and well-being. Compares the new system with the old process in this regard.

Benefits and Limitations of Responsible AI Impact Assessments

While a Responsible AI Impact Assessment is a systematic approach to identifying risks and ways to mitigate them, harms are difficult to measure, and the list of risks is only as good as people are at imagining them. Disasters that have been avoided for one reason or another are also inherently invisible. How do you measure, or even notice, a thing that hasn't happened?

A Responsible AI Impact Assessment is only useful if your organization is prepared to listen to the results and act on them. If you are able to take on board the results of an assessment, you can not only benefit from harms avoided but also promote your responsible AI process as something that sets you apart from others and makes you a leader in your industry.
