Abstract
The learning curve is a fundamental empirical measure of perceptual learning. Typically constructed from contrast or difference thresholds estimated from blocks or sessions of hundreds of trials, it may gloss over short-term performance changes such as rapid initial learning or overnight performance improvements. Here, we developed a two-layer hierarchical Bayesian model (HBM) to compute the joint posterior distribution of the learning curves at the population and individual-subject levels, and a Bayesian inference procedure (BIP) to estimate the posterior distribution of the learning curve for each subject independently. In the HBM, hyperparameters of the contrast thresholds at the population level defined the means of the distributions of the contrast thresholds at the subject level. We applied both procedures to data from two studies that investigated the interaction of feedback and training accuracy in Gabor orientation identification in 1920 trials over six sessions (Liu et al., 2010, 2012). Learning curves were estimated with block sizes of 20, 40, 80, 160, and 320 trials. Averaged across all subjects, the contrast-threshold posterior distributions from the HBM exhibited much smaller average credible intervals than those from the BIP at all block sizes, with the greatest advantage at the smallest block sizes. Using the HBM, we found significant learning in all feedback and training-accuracy combinations except the low-training-accuracy, no-feedback condition across block sizes, consistent with the original studies. Modeling the learning curves with long- and short-term processes (Yang et al., 2022) identified significant offline gains at 20, 40, and 80 trials/block. The HBM can be used to construct learning curves at finer temporal grains.
The finer-grained learning curves may capture short-term performance changes and yield more accurate and precise estimates of the specificity and transfer of perceptual learning, owing to improved assessment of initial performance in training and transfer.
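To make the credible-interval advantage concrete, here is a minimal, hypothetical sketch (not the paper's actual model): a conjugate normal-normal setup in which each subject's threshold posterior either uses a flat prior (the BIP analog) or borrows a population-level prior (the HBM analog). All variable names and parameter values are illustrative assumptions; the real HBM estimates the hyperparameters jointly rather than fixing them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): log contrast thresholds
# for S subjects drawn from a population with mean mu_pop and spread tau,
# each measured over n_trials trials with observation noise sigma.
S, n_trials = 10, 20
mu_pop, tau, sigma = -1.0, 0.2, 0.6

true_theta = rng.normal(mu_pop, tau, size=S)               # subject thresholds
data = rng.normal(true_theta[:, None], sigma, (S, n_trials))
ybar = data.mean(axis=1)                                   # per-subject means

# BIP analog: each subject fitted independently with a flat prior,
# giving posterior N(ybar_s, sigma^2 / n_trials).
bip_sd = sigma / np.sqrt(n_trials)

# HBM analog: the population level supplies a N(mu_pop, tau^2) prior for
# each subject, so posterior precisions add (simplified: hyperparameters
# are treated as known here, unlike the full joint model).
post_prec = n_trials / sigma**2 + 1 / tau**2
hbm_mean = (ybar * n_trials / sigma**2 + mu_pop / tau**2) / post_prec
hbm_sd = np.sqrt(1 / post_prec)

# The hierarchical posterior is always at least as narrow as the
# independent-fit posterior, mirroring the smaller credible intervals.
print(f"BIP posterior sd: {bip_sd:.3f}")
print(f"HBM posterior sd: {hbm_sd:.3f}")
```

In this toy version the shrinkage benefit grows as `n_trials` shrinks, which parallels the reported pattern of greater HBM advantage at smaller block sizes.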