Explainable AI (XAI): The Key to Compliant and Ethical AI in the Age of the EU AI Act

How XAI Enhances Transparency, Fairness, and Trust in AI-Driven Decision-Making

FROM THE CTO'S DESK

Milena Georgieva

2/25/2025 · 4 min read


In the previous article (link) we discussed the EU AI Act, which has been in force across the European Union since 1 August 2024. We covered the risk levels it defines and the actions companies need to take to ensure their software complies with the regulation.

In this second part of our series on AI, the AI Act, and possible solutions, we will discuss one of the solutions that can be implemented: XAI.

So we know the dangers that come with implementing AI. The ethical issues around AI are well known and have been the subject of much research and discussion. The question is: what should we do to comply with the AI Act when starting a new product that will use artificial intelligence (AI) at any stage of the project?

One solution to the visibility problem is so-called Explainable AI (or XAI for short). As per tradition, we first need to research what XAI is and how it is implemented.

Arriving at one universal definition for a new field is challenging, so here is the one used by IBM:
“Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.

Explainable AI is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making. Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production. AI explainability also helps an organization adopt a responsible approach to AI development.”

Source: https://www.ibm.com/think/topics/explainable-ai

So from the above, we need an approach that guarantees visibility (neural networks, for example, are considered pure black boxes) and the ability to explain how the machine learning (ML) algorithm calculated its outcome. Here is what XAI should provide to establish trust:

  1. Prediction accuracy

The challenge here is that AI models are complex and sometimes produce inaccurate results. Their predictions are often seen as coming from a “black box”, which is hard for an average user to understand and trust.

How does XAI solve this? XAI provides explanations that point out which features carry the greatest weight in the final decision.

SHAP (Shapley Additive Explanations) provides a breakdown of the contribution of every feature used in the process (for instance age, ethnicity, education, profession, income, etc.). If, for example, a requested credit is denied, the system will have to give a detailed and reasonable explanation, for example that the credit score was the bottleneck in this specific case. This way the applicant has the chance to understand the reason behind the rejection and work on improving their credit score before applying again. And why is this important? In some countries, merely applying for credit affects your credit score, and a rejection has an even greater negative influence.

The impact: users gain confidence that the decision was fair and the system is reliable. A short code sketch of this idea follows below.
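Here is a minimal Python sketch of that per-feature breakdown using the shap library. The data, feature names (age, income, credit_score, years_employed), and model are hypothetical stand-ins, not a real credit system; the point is how a single decision gets decomposed into feature contributions.

```python
# Hypothetical sketch: explaining one credit decision with SHAP.
# Data, features, and model are synthetic stand-ins for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
features = ["age", "income", "credit_score", "years_employed"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
# Toy target: approval driven mostly by credit score and income.
y = (2 * X["credit_score"] + X["income"] > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Explain a single (hypothetically rejected) application.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)[0]  # log-odds per feature

# Sort so the feature that pushed the decision down the most comes first.
for name, value in sorted(zip(features, contributions), key=lambda p: p[1]):
    print(f"{name:>15}: {value:+.3f}")
```

In a real credit system, this printout would be translated into a human-readable reason such as “your credit score lowered the approval odds the most”.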

  2. Traceability

The challenge here is that very often it is not clear how raw data was collected and transformed into the data used by the models, nor how that data was turned into a final decision.

The solution XAI suggests is full traceability through a clear audit trail. It also provides information on the evolution of the model, including all the steps involved.

An example of traceability is the decision tree. The algorithm is easy to understand and audit: you simply follow the data. Take the question “Whom should we hire for a position?” You have requirements for a candidate such as education, technologies, skills, and experience, and when every candidate is run through the algorithm, the path to the final decision can be traced step by step.

This approach provides traceability, and if a bad decision is made, it can be traced back until the reason is identified.
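As a small illustration, the sketch below trains a tiny scikit-learn decision tree on synthetic hiring data (the features and the “hire” label are invented for this example) and prints both the full rule set and the exact path one candidate takes through it.

```python
# Hypothetical sketch: a decision tree as a traceable hiring screen.
# Features, data, and the "hire" label are synthetic, for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["years_experience", "education_level", "skills_match"]
X = rng.integers(0, 10, size=(200, 3)).astype(float)
y = (X[:, 0] + X[:, 2] > 10).astype(int)  # toy "hire" rule

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The whole tree is a human-readable audit trail of every rule it learned.
print(export_text(tree, feature_names=features))

# decision_path lists exactly which nodes one candidate passed through,
# so a rejected candidate's outcome can be traced rule by rule.
candidate = X[:1]
print("Nodes visited:", tree.decision_path(candidate).indices)
```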

  3. Decision understanding

Complex AI systems are good at making decisions, but the reasons behind those decisions are hard for humans to understand. Examples include hiring a particular candidate, applying a specific medical treatment, or making a specific judgment in court during a criminal trial.

XAI shows, often with visualisations, the path that was pursued during the process. The end result is that even an inexperienced user can understand the decision.

LIME (Local Interpretable Model-agnostic Explanations) is part of the XAI toolbox and can be used to make a decision more understandable for the user. For example, a patient can review the reason why she was diagnosed with a particular disease, especially when the decision is between two very common illnesses (like the common flu and COVID-19). This also helps doctors understand the illness better, distinguish between treatments, and adjust them.
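Here is a hedged sketch of that idea with the lime package: a synthetic “flu vs. COVID-19” classifier (the symptom features, data, and model are invented for illustration) whose single prediction LIME approximates with a local, interpretable model.

```python
# Hypothetical sketch: explaining one diagnosis-style prediction with LIME.
# Symptom features, data, and model are synthetic, for illustration only.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
features = ["fever_days", "cough_severity", "loss_of_smell", "age"]
X = rng.normal(size=(400, len(features)))
y = (X[:, 2] > 0.3).astype(int)  # toy rule: loss of smell points to covid

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=features, class_names=["flu", "covid"],
    mode="classification",
)

# Fit a local linear approximation around one patient's prediction.
explanation = explainer.explain_instance(
    X[0], model.predict_proba, num_features=len(features)
)
for rule, weight in explanation.as_list():
    print(f"{rule}: {weight:+.3f}")
```

The printed rules (e.g. “loss_of_smell > 0.3”) are what a doctor or patient would see as the local reasons behind this specific prediction.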

  4. Fairness and bias mitigation

As mentioned above, there have been multiple cases where software systems were found to be biased. There are different reasons behind this: biased data, biased algorithms, and biased expected outputs.

Here XAI aims to reduce bias and, where needed, to identify it at an early stage. These systems should also be tested against different data sets to make sure they are as fair as they should be.

AIF360 (AI Fairness 360) offers exactly this. Fairness rules can be incorporated from the very beginning, so that, for example, a hiring system can be shown to select potential candidates fairly. The aim is to reduce the chance of the system discriminating based on past decisions or on features that have not been explored yet.
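The sketch below shows the general shape of this workflow in AIF360, using synthetic hiring data with “gender” as an illustrative protected attribute: measure disparate impact, apply the Reweighing pre-processing algorithm, and measure again. Everything about the data is invented; only the library calls follow AIF360's documented API.

```python
# Hypothetical sketch: measuring and mitigating bias with AIF360.
# The hiring data and the choice of protected attribute are synthetic.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "gender": rng.integers(0, 2, 1000),   # protected attribute (0/1)
    "experience": rng.integers(0, 20, 1000),
    "hired": rng.integers(0, 2, 1000),    # binary label
})

dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["gender"]
)
privileged = [{"gender": 1}]
unprivileged = [{"gender": 0}]

# Disparate impact of 1.0 means the two groups are hired at equal rates.
before = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact before:", before.disparate_impact())

# Reweighing adjusts instance weights so label and group become independent.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
after = BinaryLabelDatasetMetric(
    reweighed, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact after:", after.disparate_impact())
```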

At the end of this discussion, we can look back and see that XAI covers some of the weaknesses AI might have. We also have to admit it is a good starting point for implementing the EU AI Act. The topic will be discussed more and more in the very near future, since specific deadlines are built into the Act. Everyone involved in delivering, implementing, and using AI systems should be trained to make sure the AI Act becomes part of the compliance of the companies we work for. The discussions are on the table. Implementation is loading.