When implicit feedback recommender systems expose users to items, they influence the users' choices and, consequently, their own future recommendations. This effect is known as exposure bias, and it can cause undesired effects such as filter bubbles and echo chambers. Previous research has used multinomial logit models to reduce the exposure bias resulting from over-exposure, but only on synthesized data. We hypothesized that these findings hold for human choice data only to a limited degree and that more advanced discrete choice models reduce bias further. We also investigated whether the composition of choice sets can itself cause exposure bias. To pursue our research questions, we collected partially biased human choices in a controlled online user study. In two experiments, we evaluated how discrete choice–based recommender systems and baselines react to over-exposure and to over- and under-competitive choice sets. Our results confirmed that leveraging choice set information mitigates exposure bias. The multinomial logit model reduced exposure bias comparably to the other discrete choice models. Choice set competitiveness biased the models that did not consider choice alternatives. Our findings suggest that discrete choice models are highly effective at mitigating exposure bias in recommender systems and that existing recommender systems may suffer from more exposure bias than previously thought.
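As a point of reference for the choice-set dependence discussed above, a minimal sketch of the standard multinomial logit choice rule is given below; the utility notation $u_i$ and choice set symbol $C$ are generic and not taken from this work:
\[
P(i \mid C) = \frac{\exp(u_i)}{\sum_{j \in C} \exp(u_j)},
\]
where $C$ is the set of alternatives presented to the user and $u_i$ is the latent utility of item $i$. The normalization over $C$ is what makes such models sensitive to which alternatives were exposed alongside item $i$, in contrast to pointwise implicit-feedback models that ignore the choice set.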