(FER) such as human-computer interfaces or health applications. New approaches based on machine learning (ML) are achieving successful results, but their use raises concerns related to bias, fairness, and explainability, which can undermine users' trust. This work studies how gender-biased training datasets affect fairness in FER. The main outcomes show which facial expressions' recognition is most impacted by gender bias.