David Hubert

AI Act: Gender and Race Bias

Racial and gender bias in artificial intelligence (AI) refers to the unjust and disproportionate impact that AI systems can have on individuals based on their race or gender. These biases often stem from the data used to train machine learning models, which may reflect historical and societal inequalities. When biased data is used, AI systems can perpetuate and even exacerbate existing prejudices.

In the context of race, AI algorithms may inadvertently learn and replicate discriminatory patterns present in historical datasets. For example, if a facial recognition system is trained on predominantly light-skinned faces, it may perform poorly when presented with darker-skinned faces, leading to biased outcomes in areas such as criminal identification or hiring processes.

Gender bias in AI arises when algorithms are trained on imbalanced datasets that reflect historical gender disparities. This can manifest in various ways, such as biased language models reinforcing stereotypes or algorithms used in recruitment processes favoring one gender over another.

The biased outcomes from AI systems not only impact individuals but can also contribute to systemic discrimination and reinforce societal inequalities. Addressing these issues requires a multi-faceted approach, including diversifying the teams developing AI systems, implementing fairness-aware algorithms, and critically examining and improving the quality of training data. Additionally, establishing ethical guidelines and regulations to govern the development and deployment of AI technologies is crucial to mitigate the impact of biases.

How does the AI Act address gender and race bias?

Ensuring that AI systems avoid generating or perpetuating bias is of utmost importance. Well-designed and appropriately utilized AI systems can actively contribute to mitigating bias and dismantling existing structural discrimination, leading to fairer and non-discriminatory outcomes, especially in areas like recruitment. The newly mandated requirements in the AI Act are intended to serve this objective.

Under these new mandatory requirements, high-risk AI systems will have to be technically robust, guaranteeing that the technology is fit for purpose and that false positive or false negative results do not disproportionately affect protected groups (e.g. those defined by racial or ethnic origin, sex or age).
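As an illustration of what "disproportionate false positive/negative results" means in practice (this sketch is not part of the Act itself; the data and group labels are hypothetical), one can compare error rates of a binary classifier broken down by protected group:

```python
def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

def per_group_error_rates(y_true, y_pred, groups):
    """Error rates computed separately for each protected group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = error_rates([y_true[i] for i in idx],
                               [y_pred[i] for i in idx])
    return rates
```

A large gap between the per-group rates (for instance, a much higher false positive rate for one racial group in a facial recognition system) is exactly the kind of disproportionate impact the robustness requirement targets.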

High-risk systems will need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases embedded in the model and ensure that these can be addressed through appropriate bias detection, correction and other mitigating measures.
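One simple, widely used bias-detection measure that such testing could rely on (offered here as a hedged example, not as a metric mandated by the Act) is the demographic parity gap: the difference in positive-outcome rates between the most- and least-favoured groups.

```python
def selection_rate(y_pred, groups, group):
    """Share of positive predictions within one protected group."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds) if preds else 0.0

def demographic_parity_gap(y_pred, groups):
    """Largest difference in selection rates across all groups (0 = parity)."""
    rates = [selection_rate(y_pred, groups, g) for g in set(groups)]
    return max(rates) - min(rates)
```

A gap close to zero suggests the model selects candidates at similar rates across groups; a large gap would trigger the correction and mitigation measures the Act requires.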

They will also have to be traceable and auditable, ensuring that appropriate documentation is kept, including documentation of the data used to train the algorithm, which will be key in ex post investigations.
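In practice, traceability of training data can start with something as simple as a provenance record per dataset. The sketch below is a hypothetical illustration (the Act does not prescribe any particular format): it fingerprints a dataset file and timestamps the record so an auditor can later verify exactly which data was used.

```python
import datetime
import hashlib

def dataset_audit_record(path, description):
    """Build a simple provenance record for a training dataset file.

    The SHA-256 digest lets an auditor confirm, ex post, that the file
    on record is byte-for-byte the one used in training.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "file": path,
        "sha256": digest,
        "description": description,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```

Such records, kept alongside model documentation, are one way to make the "appropriate documentation" requirement concrete.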

