Differential Evolution (DE) is a population-based stochastic global optimization technique that requires only a few parameters to be adjusted in order to produce results. However, the control parameters involved in DE are highly dependent on the optimization problem; in practice, fine-tuning them is not always an easy task. Self-adaptive differential evolution (SADE) variants are those that do not require a pre-specified choice of control parameters; instead, the control parameters are self-adapted using previous learning experience. In this paper, we discuss and evaluate popular common and self-adaptive differential evolution (DE) algorithms. In particular, we present an empirical comparison between two self-adaptive DE variants and common DE methods. To ensure a fair comparison, we test the methods on a number of well-known unimodal and multimodal, separable and non-separable benchmark optimization problems for different dimensions and population sizes. The results show that the SADE variants outperform, or at least produce results similar to, the common differential evolution algorithms in terms of solution accuracy and convergence speed. The advantage of the self-adaptive methods is that the user does not need to adjust the control parameters; therefore, the total computational effort is significantly reduced.
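To make the self-adaptation idea concrete, the following is a minimal sketch of one well-known scheme, the jDE rule of Brest et al., in which each individual carries its own mutation factor F and crossover rate CR that are occasionally re-sampled and survive only when the resulting trial vector replaces its parent. The function name `sade_sketch`, the parameter defaults, and the benchmark used here are illustrative assumptions, not the specific variants evaluated in the paper.

```python
import random

def sade_sketch(f, bounds, np_=20, gens=200, seed=0):
    """Minimal jDE-style self-adaptive DE/rand/1/bin sketch (illustrative)."""
    rng = random.Random(seed)
    dim = len(bounds)
    tau1, tau2 = 0.1, 0.1          # probabilities of re-sampling F and CR
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    F = [0.5] * np_                # per-individual control parameters
    CR = [0.9] * np_
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # self-adapt F and CR before creating the trial vector
            Fi = rng.uniform(0.1, 1.0) if rng.random() < tau1 else F[i]
            CRi = rng.random() if rng.random() < tau2 else CR[i]
            # DE/rand/1 mutation: three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # ensures at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CRi or j == jrand:
                    v = pop[a][j] + Fi * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp to the search bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:       # greedy selection; Fi, CRi survive too
                pop[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
    best = min(range(np_), key=fit.__getitem__)
    return pop[best], fit[best]

# usage: minimize the (separable, unimodal) sphere function in 5 dimensions
sphere = lambda x: sum(v * v for v in x)
x_best, f_best = sade_sketch(sphere, [(-5.0, 5.0)] * 5, np_=30, gens=300)
```

Because the F and CR values of successful individuals propagate while unsuccessful ones are re-sampled, the population's control parameters drift toward settings that work for the problem at hand, which is the "previous learning experience" mentioned above.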