Concurrent learning (CL) is a recently developed adaptive update scheme that can be used to guarantee parameter convergence without requiring persistent excitation. However, this technique requires knowledge of state derivatives, which are usually not directly sensed and therefore must be estimated. This paper develops a novel integral CL method that removes the need to estimate state derivatives while maintaining parameter convergence properties. The adaptive update law exploits data recorded online and uses numerical integration to circumvent the need for state derivatives. The novel adaptive update law results in negative definite parameter error terms in the Lyapunov analysis, provided an online‐verifiable finite excitation condition is satisfied. A Monte Carlo simulation illustrates improved robustness to noise compared to the traditional derivative formulation. The result is also extended to Euler‐Lagrange systems, and simulations on a two‐link planar robot demonstrate improved performance compared to gradient‐based adaptation laws.
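To make the idea concrete, the following is a minimal sketch, not the paper's exact formulation: it illustrates only the data-driven integral CL term for parameter identification of a scalar system that is linear in the unknown parameters, omitting the tracking-error feedback term a full adaptive controller would include. The gains, the 0.5 s integration window, the exciting input, the example dynamics, and names such as `script_Y` are all illustrative assumptions. Integrating the dynamics over a sliding window replaces the unmeasurable state derivative with a measurable state difference, since `x(t) - x(t - Dt) = (∫ Y dτ) θ` holds for the true parameters.

```python
import numpy as np

# Illustrative system (assumed, not from the paper): xdot = Y(x, t) @ theta
theta_true = np.array([-1.0, 2.0])        # unknown parameters (simulation only)
u = lambda t: np.sin(2.0 * t)             # known exciting input (assumed)
Y = lambda x, t: np.array([[x, u(t)]])    # known regressor, linear in theta

dt, T = 1e-3, 10.0                        # plant step and horizon (assumed)
N = int(T / dt)
n_w = int(0.5 / dt)                       # 0.5 s integration window (assumed)

Gamma = 5.0 * np.eye(2)                   # adaptation gain (assumed)
k_cl = 5.0                                # concurrent-learning gain (assumed)

x, theta_hat = 1.0, np.zeros(2)
xs, Ys = [x], [Y(x, 0.0)]                 # recorded state and regressor histories
stack = []                                # history stack of (script_Y, delta_x)

for i in range(1, N + 1):
    t = i * dt
    # Euler step of the true plant; in practice x would come from sensors
    x = x + (Ys[-1] @ theta_true).item() * dt
    xs.append(x)
    Ys.append(Y(x, t))

    # Each full window yields one derivative-free data point:
    #   script_Y = int_{t-Dt}^{t} Y dtau,   delta_x = x(t) - x(t - Dt),
    # and delta_x = script_Y @ theta holds for the true parameters.
    if i % n_w == 0:
        script_Y = sum(Ys[i - n_w:i]) * dt        # rectangle-rule integral
        delta_x = np.array([xs[i] - xs[i - n_w]])
        stack.append((script_Y, delta_x))

    # ICL term: summed prediction errors over recorded data drive theta_hat
    # toward theta_true once the stack satisfies a finite excitation
    # (full-rank) condition, which can be checked online.
    if stack:
        cl = sum(sY.T @ (dx - sY @ theta_hat) for sY, dx in stack)
        theta_hat = theta_hat + dt * k_cl * (Gamma @ cl)

print("theta_hat =", theta_hat, "  theta_true =", theta_true)
```

Note that the excitation requirement here is finite rather than persistent: once the recorded stack of `(script_Y, delta_x)` pairs spans the parameter space, no further excitation is needed for the estimate to converge.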