We have partnered with the Know-Center to explore AI transparency and explainability.
In recent years, AI-based systems have been applied to a broad range of tasks. Many of these tasks rely on AI-made decisions that are highly sensitive in nature and pose potential risks to the well-being of individuals or social groups, such as approving loans, managing hiring processes and making health-related decisions. For AI to be used appropriately in these and other sensitive situations, AI-made decisions must be understandable and reasonable to human beings.
Transparency can be defined as the understandability of a specific AI system: how well we know what happens in which part of the system. It can serve as a mechanism that facilitates accountability (Lepri et al. 2018). Explainability is a closely related concept (Lepri et al. 2018; Larsson and Heintz 2020) and refers to providing information, after the fact, on the logic, process, factors or reasoning on which the AI system’s actions are based. Explainable AI (XAI) can be achieved in various ways, for example by adapting existing AI systems or by developing AI systems that are explainable by design. These approaches are commonly referred to as “XAI methods”.
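To make these two routes concrete, here is a minimal sketch in Python using scikit-learn (an illustration of ours, not taken from the white paper): a shallow decision tree stands in for a model that is explainable by design, while permutation feature importance serves as a post-hoc XAI method applied to a black-box random forest.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# 1) Explainable by design: a shallow decision tree whose full
#    decision logic can be printed and audited directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))

# 2) Post-hoc explanation of an opaque model: permutation importance
#    estimates how strongly each input feature drives the predictions
#    of an already-trained black-box classifier.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]
)[:5]:
    print(f"{name}: {score:.3f}")
```

Permutation importance is only one of many post-hoc techniques; model-agnostic methods such as LIME and SHAP follow the same pattern of explaining a trained model from the outside.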
According to Meske et al. (2022), transparency and explainability in AI pertain to five stakeholder groups: AI developers, AI managers, AI users, AI regulators and the individuals affected by AI-made decisions.
Motivated by the importance of explainability for these and many other sensitive real-world tasks, we have co-written a white paper on AI transparency and explainability.
Download it here.