The present study examined basic assumptions embedded in learning models for predicting behavior in decisions based on experience. In such decisions, the probabilities and payoffs are initially unknown and must be learned from repeated choices with payoff feedback. We examined combinations of two rules for updating past experience with new payoff feedback and two choice rules for mapping experience onto choices; crossing these assumptions produced four classes of models that were systematically compared. Two methods were employed to evaluate how well the learning models approximated players’ choices: the first estimated parameters from each person’s data to maximize the prediction of choices one step ahead, conditioned on the observed history of feedback; the second made a priori predictions for the entire sequence of choices using parameters estimated from a separate experiment. The results indicated an advantage for the class of models incorporating decay of previous experience, whereas the ranking of the choice rules depended on the evaluation method used.
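To make the model classes concrete, the following is a minimal sketch of one such combination, assuming a decay updating rule (all past expectancies are discounted by a parameter phi on every trial) and a softmax choice rule (with sensitivity parameter theta). The functional forms, parameter values, and the function name `decay_softmax_choice_probs` are illustrative assumptions, not the exact models compared in the study.

```python
import numpy as np

def decay_softmax_choice_probs(payoffs, chosen, phi=0.8, theta=1.0, n_options=2):
    """Illustrative decay-rule learner paired with a softmax choice rule.

    payoffs   : sequence of observed payoffs, one per trial
    chosen    : sequence of chosen option indices, one per trial
    phi       : decay parameter in [0, 1]; discounts prior experience each trial
    theta     : sensitivity parameter of the softmax choice rule
    n_options : number of choice options

    Returns the predicted choice-probability vector for the next trial.
    """
    expectancy = np.zeros(n_options)
    for v, j in zip(payoffs, chosen):
        expectancy *= phi        # decay rule: discount all past experience
        expectancy[j] += v       # add the new payoff to the chosen option
    # Softmax choice rule: map expectancies onto choice probabilities
    scaled = theta * expectancy
    e = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return e / e.sum()

# Example: two options and a short payoff-feedback history
probs = decay_softmax_choice_probs(payoffs=[10, -5, 10], chosen=[0, 1, 0])
print(probs)  # predicted probability of choosing each option next trial
```

Under this sketch, the two evaluation methods described above correspond to fitting phi and theta to each participant's data to maximize one-step-ahead prediction, versus fixing them in advance from a separate experiment and predicting the entire choice sequence.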