Human-centric multimodal machine learning: Recent advances and testbed on AI-based recruitment

A Peña, I Serna, A Morales, J Fierrez, A Ortega… - SN Computer …, 2023 - Springer
The presence of decision-making algorithms in society is rapidly increasing,
while concerns about their transparency and the possibility of these algorithms becoming …

Demographic bias in biometrics: A survey on an emerging challenge

P Drozdowski, C Rathgeb, A Dantcheva… - … on Technology and …, 2020 - ieeexplore.ieee.org
Systems incorporating biometric technologies have become ubiquitous in personal,
commercial, and governmental identity management applications. Both cooperative (e.g., …

A comprehensive study on face recognition biases beyond demographics

P Terhörst, JN Kolf, M Huber… - … on Technology and …, 2021 - ieeexplore.ieee.org
Face recognition (FR) systems have a growing effect on critical decision-making processes.
Recent works have shown that FR solutions show strong performance differences based on …

Face recognition: too bias, or not too bias?

JP Robinson, G Livitz, Y Henon, C Qin… - Proceedings of the …, 2020 - openaccess.thecvf.com
We reveal critical insights into problems of bias in state-of-the-art facial recognition (FR)
systems using a novel Balanced Faces in the Wild (BFW) dataset: data balanced for gender …
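
As a rough illustration of the disaggregated evaluation that a demographically balanced dataset enables, the sketch below computes per-group false match and false non-match rates from verification scores. The array names (scores, same_identity, groups) and the single fixed threshold are assumptions for illustration, not part of the BFW protocol.

    import numpy as np

    def per_group_error_rates(scores, same_identity, groups, threshold):
        """Disaggregate verification errors by demographic group.

        scores        : similarity score for each comparison pair
        same_identity : boolean, True if the pair is a genuine (same-person) match
        groups        : demographic label assigned to each pair
        threshold     : decision threshold applied to the scores
        """
        report = {}
        for g in np.unique(groups):
            m = groups == g
            genuine, impostor = m & same_identity, m & ~same_identity
            # False non-match rate: genuine pairs rejected by the threshold
            fnmr = np.mean(scores[genuine] < threshold) if genuine.any() else np.nan
            # False match rate: impostor pairs accepted by the threshold
            fmr = np.mean(scores[impostor] >= threshold) if impostor.any() else np.nan
            report[g] = {"FNMR": fnmr, "FMR": fmr}
        return report

Large gaps in FNMR or FMR between groups at the same threshold are the kind of bias symptom such balanced benchmarks are designed to expose.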

SensitiveNets: Learning agnostic representations with application to face images

A Morales, J Fierrez, R Vera-Rodriguez… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
This work proposes a novel privacy-preserving neural network feature representation to
suppress the sensitive information of a learned space while maintaining the utility of the …
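
A common way to suppress a sensitive attribute from a learned representation is adversarial training with gradient reversal; the minimal PyTorch sketch below illustrates that generic idea only and is not the exact SensitiveNets formulation (module names, layer sizes, and the single linear encoder are placeholders).

    import torch
    import torch.nn as nn

    class GradientReversal(torch.autograd.Function):
        """Identity in the forward pass, flips the gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    class AgnosticEncoder(nn.Module):
        """Embedding trained for a main task while an adversary tries to recover
        the sensitive attribute; the reversed gradient pushes the encoder to
        discard that attribute while keeping the embedding useful."""
        def __init__(self, dim_in, dim_emb, n_classes, n_sensitive):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim_in, dim_emb), nn.ReLU())
            self.task_head = nn.Linear(dim_emb, n_classes)         # utility (e.g., identity)
            self.sensitive_head = nn.Linear(dim_emb, n_sensitive)  # adversary (e.g., gender)

        def forward(self, x):
            z = self.encoder(x)
            return self.task_head(z), self.sensitive_head(GradientReversal.apply(z))

Training then minimizes the task loss plus the adversary's loss: because the adversary's gradient is reversed at the embedding, lowering the joint objective makes the sensitive attribute harder to read out of the representation.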

InsideBias: Measuring bias in deep networks and application to face gender biometrics

I Serna, A Peña, A Morales… - 2020 25th International …, 2021 - ieeexplore.ieee.org
This work explores the biases in learning processes based on deep neural network
architectures. We analyze how bias affects deep learning processes through a toy example …
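
One simple way to probe such biases is to compare the magnitude of internal activations across demographic groups; the sketch below does this with a PyTorch forward hook. It is only an illustration of that idea, not necessarily the InsideBias measure itself, and model, layer, images, and groups are hypothetical inputs.

    import torch

    def mean_activation_by_group(model, layer, images, groups):
        """Average activation magnitude of `layer` for each demographic group.
        Large gaps between groups can be a symptom of a biased representation."""
        captured = {}
        handle = layer.register_forward_hook(
            lambda mod, inp, out: captured.update(act=out.detach()))
        stats = {}
        model.eval()
        with torch.no_grad():
            for g in set(groups.tolist()):
                model(images[groups == g])
                stats[g] = captured["act"].abs().mean().item()
        handle.remove()
        return stats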

Sensitive loss: Improving accuracy and fairness of face representations with discrimination-aware deep learning

I Serna, A Morales, J Fierrez, N Obradovich - Artificial Intelligence, 2022 - Elsevier
We propose a discrimination-aware learning method to improve both the accuracy and
fairness of biased face recognition algorithms. The most popular face recognition …
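
A discrimination-aware objective can be sketched as the standard task loss plus a penalty on the spread of per-group losses; the example below shows one such generic regularizer and is not the paper's exact Sensitive loss (the lam weight and the max-minus-min gap are assumptions).

    import torch
    import torch.nn.functional as F

    def discrimination_aware_loss(logits, labels, groups, lam=0.5):
        """Cross-entropy plus a penalty on the gap between per-group losses."""
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        group_means = torch.stack(
            [per_sample[groups == g].mean() for g in torch.unique(groups)])
        # Penalize how far the worst-treated group falls behind the best-treated one
        gap = group_means.max() - group_means.min()
        return per_sample.mean() + lam * gap

Minimizing such an objective trades some average loss for a smaller performance gap between groups, which is the general trade-off that discrimination-aware training navigates.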

Bias in multimodal AI: Testbed for fair automatic recruitment

A Peña, I Serna, A Morales… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
The presence of decision-making algorithms in society is rapidly increasing,
while concerns about their transparency and the possibility of these algorithms becoming …

Leave-one-out unfairness

E Black, M Fredrikson - Proceedings of the 2021 ACM Conference on …, 2021 - dl.acm.org
We introduce leave-one-out unfairness, which characterizes how likely a model's prediction
for an individual will change due to the inclusion or removal of a single other person in the …
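
The quantity described here can be estimated directly by retraining with each training point held out in turn and checking whether the prediction for a fixed individual changes; the sketch below does this naively with a scikit-learn logistic regression (the model choice and brute-force loop are illustrative assumptions, not the paper's method).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def leave_one_out_flip_rate(X_train, y_train, x_target):
        """Fraction of single-point removals that flip the prediction for x_target."""
        base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        base_pred = base.predict(x_target.reshape(1, -1))[0]
        flips = 0
        for i in range(len(X_train)):
            X_loo = np.delete(X_train, i, axis=0)
            y_loo = np.delete(y_train, i)
            model = LogisticRegression(max_iter=1000).fit(X_loo, y_loo)
            flips += int(model.predict(x_target.reshape(1, -1))[0] != base_pred)
        return flips / len(X_train)

An individual with a high flip rate is unfairly treated in the leave-one-out sense: their outcome hinges on the arbitrary presence of a single other person in the training data.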

Algorithmic discrimination: Formulation and exploration in deep learning-based face biometrics

I Serna, A Morales, J Fierrez, M Cebrian… - arXiv preprint arXiv …, 2019 - arxiv.org
The most popular face recognition benchmarks assume a distribution of subjects without
much attention to their demographic attributes. In this work, we perform a comprehensive …