Explainable Artificial Intelligence (XAI) is a form of artificial intelligence that aims to describe its purpose, logic, and decisions in a way the average person can understand. Often mentioned in conjunction with deep learning, it plays a key role in FAT ML (Fairness, Accountability, and Transparency in Machine Learning), a movement toward non-discriminatory, responsible, and transparent machine learning models.
XAI provides comprehensive information about the decision-making process of an AI program by revealing:
- Its strengths and weaknesses
- The exact criteria used to reach a decision
- Why it chose that decision over the alternatives
- The appropriate level of confidence for different types of decisions
- The types of mistakes it is likely to make
- How its errors can be corrected
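The items above can be made concrete with a small example. The sketch below uses a hypothetical, hand-set linear credit-scoring model (the feature names, weights, and threshold are invented for illustration, not from any real system) to show how an interpretable model can expose the exact criteria behind a decision, its confidence, and each feature's contribution:

```python
import math

# Hypothetical interpretable model: feature names and learned weights
# are illustrative assumptions, not a real scoring system.
FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = [0.8, -1.5, 0.4]
BIAS = -0.2

def explain_decision(values):
    """Score one applicant and attribute the score to each feature."""
    # Per-feature contribution: weight * value (the 'exact criteria')
    contributions = {f: w * v for f, w, v in zip(FEATURES, WEIGHTS, values)}
    score = BIAS + sum(contributions.values())
    # Logistic transform gives a confidence level for the decision
    confidence = 1.0 / (1.0 + math.exp(-score))
    decision = "approve" if confidence >= 0.5 else "reject"
    return decision, confidence, contributions

decision, confidence, contributions = explain_decision([1.2, 0.3, 0.5])
print(decision, round(confidence, 3))   # decision plus its confidence
print(contributions)                    # which features drove it, and how much
```

Because every contribution is visible, a reviewer can see not just the output but why it was produced, e.g. that a high debt ratio pulled the score down while income pushed it up, which is exactly the transparency a black-box model lacks.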
One of XAI's main goals is accountability. Until now, AI systems have essentially been black boxes: even when the input and output data are known, the algorithms that lead to a decision are proprietary or opaque, and remain hard to interpret even when the internal logic is freely available as open source.
With the spread of artificial intelligence, knowing how to deal with bias and the issue of trust is more important than ever. Note, for example, that one of the provisions of the European Union's General Data Protection Regulation (GDPR) provides for a right to explanation.
This profile was updated in October 2018.