
Can AI Explain Itself?

Aug. 26, 2020
NIST researchers set out to establish rules for making AI “self-explanatory.”

As artificial intelligence (AI) begins making more consequential decisions that affect our lives, researchers and users want these AI systems to answer a few simple questions—chief among them, why did the system make a specific decision? Researchers at the National Institute of Standards and Technology (NIST) believe AI systems should be able to provide such information if users are ever to trust them.

To that end, a team of NIST scientists is proposing a set of four principles, or rules, that AI systems should follow to explain their decisions. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to spark debate about what society should expect of its decision-making machines.

Those four principles are:

  • AI systems should deliver accompanying evidence or reasons for all their outputs.
  • Systems should provide explanations that are meaningful or understandable to individual users.
  • The explanation should accurately reflect the system’s process for generating the output.
  • The system should operate only under the conditions it was designed for, or when it has sufficient confidence in its output. (The idea is that if a system lacks sufficient confidence in its decision, it should not give the user a decision; the sketch after this list illustrates the idea.)
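
The fourth principle lends itself to a concrete sketch. The snippet below is a minimal, hypothetical illustration of confidence-gated decision-making—the function name, threshold and labels are invented for this example and are not drawn from the NIST report. A classifier declines to answer when its confidence falls below a set threshold, and attaches a brief explanation either way:

```python
import numpy as np

def predict_with_abstention(probabilities, labels, threshold=0.8):
    """Return a decision with evidence, or abstain when confidence is low.

    probabilities: 1-D array of class probabilities from some model.
    labels: class names aligned with `probabilities`.
    threshold: minimum confidence required to report a decision.
    (All names and values here are illustrative assumptions.)
    """
    best = int(np.argmax(probabilities))
    confidence = float(probabilities[best])
    if confidence < threshold:
        # Principle 4: withhold the decision rather than guess.
        return {"decision": None, "confidence": confidence,
                "explanation": "Confidence below threshold; no decision issued."}
    # Principle 1: accompany the output with evidence (here, the score).
    return {"decision": labels[best], "confidence": confidence,
            "explanation": f"Predicted '{labels[best]}' with probability {confidence:.2f}."}

# Example: a hypothetical loan-screening model that is unsure about one applicant.
print(predict_with_abstention(np.array([0.55, 0.45]), ["approve", "deny"]))
# -> abstains, because 0.55 < 0.80
print(predict_with_abstention(np.array([0.93, 0.07]), ["approve", "deny"]))
# -> decision: "approve", with the confidence score as supporting evidence
```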

Although these principles seem straightforward, the team acknowledges that individual users often have varied criteria for judging an AI’s success. For instance, the second principle—how meaningful the explanation is—can mean different things to different people, depending on their role and connection to the job the AI is doing.

The report is part of a broader NIST effort to develop trustworthy AI systems by understanding these systems’ theoretical capabilities and limitations while improving their accuracy, reliability, security and explainability.

The authors are seeking feedback on the draft from the public. And because the subject is so broad—touching on engineering, computer science, psychology and legal studies—they hope for a wide-ranging discussion.

“AI is becoming involved in high-stakes decisions, and no one wants machines making them without understanding why,” says NIST electronic engineer Jonathon Phillips.

Understanding the reasons behind an AI system’s output benefits everyone the output touches. If an AI contributes to a loan approval decision, for example, this understanding might help software designers improve the system. But applicants might want insight into the AI’s reasoning as well, either to understand why they were turned down, or, if they were approved, to help them continue acting in ways that maintain good credit ratings.

The NIST team’s work led it to compare the demands society might put on machines for explaining their decisions to those society places on individuals and groups. Does society measure up to the standards NIST is asking of AI? After exploring how human decisions hold up in light of the report’s four principles, the team concluded that it does not.

“Human-produced explanations for our choices and conclusions are largely unreliable,” the team notes. “Without conscious awareness, people incorporate irrelevant information into a variety of decisions from personality trait judgments to jury decisions.”

“As we make advances in explainable AI, we may find that certain parts of AI systems better meet societal expectations and goals than humans do,” says Phillips.

NIST is accepting comments on the draft until Oct. 15, 2020; for more details, visit NIST's webpage on AI explainability.

