In recent years, network bandwidths have grown steadily. This is especially true in scientific and big-data environments, where high bandwidth-delay products (BDPs) are common. It is well understood that legacy TCP (e.g., TCP Reno) is ill-suited to such environments, and several TCP variants have been developed to address this shortcoming. These variants, including CUBIC, STCP, and H-TCP, have been studied in some empirical contexts, and analytical models exist for CUBIC and STCP. However, since these studies were conducted, BDPs have increased further, and new bulk data transfer methods that utilize parallel TCP streams have emerged. In view of these developments, it is imperative to revisit the question: `Which congestion control algorithms are best adapted to current networking environments?' To answer this question, (i) we create a general theoretical framework within which to develop mathematical models of TCP variants that account for finite buffer sizes, maximum window constraints, and parallel TCP streams; (ii) we validate these models using measurements collected over a high-bandwidth testbed and achieve low prediction errors; and (iii) we find that CUBIC and H-TCP outperform STCP, especially when multiple streams are used.
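For concreteness, the window-growth law of CUBIC, one of the variants studied here, can be sketched as follows. This is the standard cubic function from RFC 8312, not this paper's model; the parameter values (C = 0.4, beta = 0.7) are the RFC defaults and are assumed for illustration only.

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """Congestion window t seconds after a loss event (RFC 8312 sketch).

    W(t) = C*(t - K)^3 + W_max, where K is the time needed to grow
    back to w_max after the window is cut to beta * w_max on loss.
    """
    k = ((w_max * (1.0 - beta)) / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max
```

Immediately after a loss the function evaluates to beta * w_max, and it returns to w_max at t = K, capturing the concave-then-convex growth that makes CUBIC effective at high BDPs.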