Which acoustic cues are most important for understanding spoken language? Traditionally, the speech signal has been described primarily in spectral terms (i.e., the distribution of energy across the acoustic frequency axis), while its temporal properties have largely been ignored. However, there is mounting evidence that low-frequency energy modulations play a crucial role, particularly those below 16 Hz (e.g., Houtgast and Steeneken 1985; Drullman et al. 1994; Greenberg et al. 1998; Greenberg and Arai 2004; Christiansen and Greenberg 2005). Modulations above 16 Hz may also contribute under certain conditions (Silipo et al. 1999; Apoux and Bacon 2004; Greenberg and Arai 2004; Christiansen and Greenberg 2005). What is currently lacking is a detailed understanding of how low-frequency amplitude-modulation cues are combined across the acoustic frequency spectrum, and of how spectral and temporal information interact. Such knowledge is likely to enhance our understanding of how spoken language is processed in noisy and reverberant environments by both normal-hearing and hearing-impaired listeners.
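To make the notion of "low-frequency amplitude-modulation cues" concrete, the following minimal sketch illustrates one common way such cues can be isolated: the temporal envelope of a signal is extracted via the Hilbert transform and then low-pass filtered at 16 Hz, the modulation range highlighted above. This is an illustrative assumption, not a procedure taken from any of the cited studies; the function name, filter order, and test signal are hypothetical.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def lowpass_envelope(signal, fs, cutoff_hz=16.0):
    """Extract the temporal envelope of a signal and keep only
    amplitude modulations below cutoff_hz (illustrative sketch)."""
    # Temporal envelope as the magnitude of the analytic signal
    envelope = np.abs(hilbert(signal))
    # Low-pass filter the envelope to retain only slow modulations
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, envelope)

# Hypothetical example: 1 s of a 1 kHz carrier amplitude-modulated at
# 4 Hz (a typical syllable-rate modulation). The recovered envelope
# follows the slow 4 Hz cycle rather than the fast carrier.
fs = 16000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 1000 * t)
modulated = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier
slow_envelope = lowpass_envelope(modulated, fs)
```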