The AI Act and Data Privacy: Navigating the Future of Ethical AI in the EU

Understanding the AI Act's Impact on Risk, Regulation, and Data Protection in the Age of Intelligent Algorithms

FROM THE CTO'S DESK

Milena Georgieva

2/20/2025 · 5 min read


Since 30 November 2022, the world has been on an unprecedented journey with artificial intelligence. While AI as a concept has been around since 1956, it has only recently gained the attention and investment needed to reshape industries worldwide. That moment marked a new chapter, with companies and users quickly realizing the potential of AI.

Companies were highly motivated to seize this new opportunity, while users began exploring the technology to stay ahead of the curve and become even more valuable in the future.

One topic gravitated around the AI hype but did not catch the interest of the masses: the ethical side of AI. We somehow accepted that the technology itself is neither good nor bad; if AI ends up being defined as bad, it will simply be because of the intentions behind it.

Although many people are unhappy with the incoming regulations in the EU, the AI Act is here to stay, and we must all be ready to comply with it.

Let’s discuss what the AI Act is, how it impacts each of us, what we need to know about it, and how explainable AI addresses some of the ethical issues with AI itself.

As mentioned above, the AI Act is here, and you need to take it into account whether or not you work in AI development.

The first thing the Act does is define what AI actually is. The Act entered into force on 1 August 2024 and applies to any system that qualifies as AI. Of course, there are different levels of risk depending on the type of system.
Here is the definition from the Act, verbatim:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;

A simple and quite precise description. If this description fits your software system, read on and act now, because you have only limited time to implement the changes that will guarantee you comply with the regulation.
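To make that self-check concrete, here is a minimal, hypothetical Python sketch that simply walks through the elements of the quoted definition. The field names and the pass/fail logic are illustrative assumptions on my part, not an official test from the Act; they are only meant to help you read the definition against your own system.

# Hypothetical self-check sketch: walks through the elements of the quoted definition.
# Field names and logic are illustrative assumptions, not an official compliance test.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool                   # runs as software/hardware rather than a manual process
    operates_with_autonomy: bool          # acts with some independence from its operator
    infers_outputs_from_input: bool       # produces predictions, content, recommendations, or decisions
    outputs_influence_environment: bool   # results affect physical or virtual environments

def may_fall_under_ai_act_definition(profile: SystemProfile) -> bool:
    """Return True if every element of the quoted definition appears to be met."""
    return all([
        profile.machine_based,
        profile.operates_with_autonomy,
        profile.infers_outputs_from_input,
        profile.outputs_influence_environment,
    ])

if __name__ == "__main__":
    chatbot = SystemProfile(True, True, True, True)
    print(may_fall_under_ai_act_definition(chatbot))  # True -> keep reading the Act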

An interesting topic the Act covers is the classification of systems by the level of risk they carry. Four levels of risk are identified: low (minimal), limited, high, and forbidden (unacceptable).

So, why might a software system be considered forbidden? In this group we find, for example, social scoring systems such as the social credit system used in China. You can easily find information online about how it works, what effect it has on people's lives, and how easily someone can be excluded by it.

Another forbidden-risk system is real-time biometric identification in publicly accessible spaces. This is a system that can identify people by their facial or other biometric data, particularly for law enforcement or security purposes, except in very specific, exceptional cases. The reason is the significant impact such systems can have on privacy and civil liberties. A disclaimer should be added here: the AI Act is not a static document that, once issued, will stay the same forever. The EU retains the right to change it if necessary in order to protect its citizens.

The next level is high-risk systems. A hiring system is a good example. There are many discussions about the fairness of the HR systems used today to select candidates for a position. You can tune a system to weigh the skills and qualities of every candidate who has ever applied to your company, yet these systems often fail at fairness. Here are some examples from the press that are easy to track down:

Amazon's AI recruiting tool (2018) was developed to help HR employees select the best-suited candidate for a given position. However, it was discovered that the system exhibited gender bias: it had been trained on ten years of data in which the candidates were mostly male. As a result, the system penalized candidates who used female-oriented expressions. For example, if a resume mentioned "women's leadership," it was ranked lower, and the candidate barely got the chance to be invited for an interview. Amazon abandoned the system after the issue was identified.
HireVue's video interviewing AI (2020) is another widely reported scandal demonstrating why these systems are considered high-risk. The software was meant to analyze candidates during the interview and predict how loyal, and ultimately how successful, a potential employee would be. It was discovered that the system was biased with respect to age, gender, race, and accent: it downgraded non-native speakers and women who used specific expressions, labeling them as less qualified.

An even more striking example of a system that belongs in the high-risk group is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). This system aims to predict the probability that someone involved in a crime will commit another crime in the future. The data involved includes ethnicity, age, and criminal record. Analyses of its results showed that Black individuals were ranked as having a higher risk of reoffending, while white individuals with similar criminal records were marked as lower risk.

The next level is limited risk. The systems that fall into this category are not seen as high-risk but still need regulation because of their potential. This includes chatbots and AI systems used for customer service. These systems are subject to transparency obligations, which means the user must be informed that they are interacting with an AI system.
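To illustrate what that transparency obligation can look like in practice, here is a minimal, hypothetical Python sketch of a customer-service bot that discloses its nature as the first message of every session. The function name and the wording of the disclosure are assumptions made for the example, not text prescribed by the Act.

# Hypothetical sketch: the user is told up front that they are talking to an AI system.
# The wording and function name are illustrative assumptions, not mandated by the Act.
def open_support_session(user_name: str) -> list[str]:
    disclosure = (
        f"Hi {user_name}, you are chatting with an automated AI assistant. "
        "You can ask to be transferred to a human agent at any time."
    )
    # The disclosure is always the first message of the session.
    return [disclosure]

if __name__ == "__main__":
    for message in open_support_session("Alex"):
        print(message)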

Recommendation systems, such as the one used by Netflix, also fall into this category and should be monitored. Since they can be used to steer people toward specific goals or revenue, these systems are likewise subject to regulation. Every customer of any company has the right to choose what to use and should not be told what to purchase or consume.

And finally, low-risk systems are, for example, those used to filter unwanted emails out of our inboxes. Since this is a general good, why do we have to monitor these systems at all? Simply because they do not work without mistakes and need to be calibrated occasionally. Sometimes you buy a new domain and your email is flagged as spam for no apparent reason, and the procedure for getting that address off the spam list is long and hard.

A second example is AI-enhanced photo-editing software. Since the impact of these systems is considered low, in the general case they have little effect on people's fundamental rights.
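As a rough illustration of the four tiers walked through above, here is a hypothetical triage helper that maps a handful of use cases to a tier. The keyword-based rules are deliberately simplified assumptions; the Act itself classifies systems through detailed use-case lists and conditions, not a few labels.

# Hypothetical triage sketch mirroring the four tiers discussed above.
# The mapping below is a simplified assumption for illustration only.
from enum import Enum

class RiskTier(Enum):
    FORBIDDEN = "unacceptable"   # e.g. social scoring, real-time biometric identification
    HIGH = "high"                # e.g. hiring tools, recidivism scoring
    LIMITED = "limited"          # e.g. chatbots, recommendation systems (transparency duties)
    LOW = "minimal"              # e.g. spam filters, AI photo editing

def triage(use_case: str) -> RiskTier:
    forbidden = {"social scoring", "real-time biometric identification"}
    high = {"hiring", "recidivism scoring"}
    limited = {"chatbot", "recommendations"}
    if use_case in forbidden:
        return RiskTier.FORBIDDEN
    if use_case in high:
        return RiskTier.HIGH
    if use_case in limited:
        return RiskTier.LIMITED
    return RiskTier.LOW

if __name__ == "__main__":
    print(triage("hiring"))       # RiskTier.HIGH
    print(triage("spam filter"))  # RiskTier.LOW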

Let’s recall something stated above: this document is not static, and any of the above can change in the future if there is a need. If you are interested in the AI Act, you can read the full text at AI Act in the EU: https://ai-act-law.eu