Traditionally, practical applications have been dominated by the sliding-window-based activity recognition chain (ARC), in which features are carefully engineered and optimized towards the specifics of the target scenario. Recently, end-to-end deep learning methods, which do not separate representation learning from classifier optimization, have also become very popular for human activity recognition (HAR) using wearables, promising "out-of-the-box" modeling with superior recognition capabilities. In this paper, we revisit and specifically analyze the role that feature representations play in HAR using wearables. In a systematic exploration, we evaluate eight different feature extraction methods, ranging from conventional heuristics to recent representation learning approaches, and assess their effectiveness for activity recognition on five benchmark datasets. We find that optimized feature learning integrated into the conventional ARC yields recognition results that are comparable to, if not better than, those of end-to-end learning methods, while offering practitioners more flexibility to tailor their systems to the specifics of wearable devices and their constraints and limitations.
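
To make the contrast concrete, the following is a minimal sketch of the conventional sliding-window ARC the paper contrasts with end-to-end learning: segmentation, feature extraction, and classification as separate stages. It assumes NumPy arrays of raw inertial data and scikit-learn for the classifier; the window length, the simple statistical features, and the RandomForestClassifier are illustrative assumptions, not the configurations evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_windows(signal, labels, win_len=100, step=50):
    """Segment a (T, channels) signal into fixed-length windows."""
    X, y = [], []
    for start in range(0, len(signal) - win_len + 1, step):
        X.append(signal[start:start + win_len])
        # label each window by its last sample's annotation (a common simplification)
        y.append(labels[start + win_len - 1])
    return np.stack(X), np.array(y)

def heuristic_features(windows):
    """Simple per-channel statistical features (mean, std, min, max)."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

# Conventional ARC: segmentation -> feature extraction -> classifier
signal = np.random.randn(10_000, 3)          # e.g. 3-axis accelerometer stream
labels = np.random.randint(0, 5, 10_000)     # 5 hypothetical activity classes
X_win, y_win = sliding_windows(signal, labels)
X_feat = heuristic_features(X_win)

clf = RandomForestClassifier(n_estimators=100).fit(X_feat, y_win)
print(clf.score(X_feat, y_win))
```

In this decoupled pipeline, the feature extraction stage (here a handcrafted heuristic) can be swapped for a learned representation without touching segmentation or the classifier, which is the flexibility the paper argues for; an end-to-end model would instead consume the raw windows directly and learn both stages jointly.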