Illumination estimation is an essential step of computational color constancy and one of the core components of the image processing pipelines of modern digital cameras. Accurate and reliable illumination estimation is important for reducing the influence of the illumination on image colors. To motivate the generation of new ideas and the development of new algorithms in this field, two illumination estimation challenges were conducted. The main advantage of testing a method in a challenge, as opposed to on one of the known datasets, is that the ground-truth illuminations for the challenge test images remain unknown until the results have been submitted, which prevents any potentially biased hyperparameter tuning. The first Illumination Estimation Challenge (IEC#1) had only a single task, global illumination estimation. The second Illumination Estimation Challenge (IEC#2) was enriched with two additional tracks covering indoor and two-illuminant illumination estimation. Its other main features are a new large dataset of about 5000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes captured in numerous countries under a wide variety of illuminations whose ground truth was extracted using the SpyderCube calibration object, and a contest-like markup for the images of the Cube++ dataset. This article describes the two past challenges, the algorithms that won each track, and the conclusions drawn from the results of the first and second challenge that can be useful for similar future developments.