Federated learning (FL) is a distributed machine learning approach in which a central server coordinates model training while the data remain isolated on edge devices. FL preserves data privacy and can improve model accuracy. However, unexpected device dropouts during training can severely degrade model performance. To address the communication overhead issue and accelerate model convergence, a novel adaptive FL approach with negative inner product aggregation, namely NIPAFed, is proposed in this article. NIPAFed leverages a congestion control algorithm inspired by TCP, the additive increase multiplicative decrease (AIMD) strategy, to adaptively predict each device's workload from its historical workload, thereby mitigating the impact of stragglers on the training process. Additionally, to reduce communication overhead and latency, a negative inner product aggregation strategy is employed to accelerate model convergence and minimize the number of communication rounds required. The convergence of the model is also analyzed theoretically. NIPAFed is evaluated on public federated data sets and compared with existing algorithms. The experimental results demonstrate the superior performance of NIPAFed. By reducing device dropouts and minimizing communication rounds, NIPAFed effectively controls the communication overhead while ensuring convergence.
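The following is a minimal sketch of the AIMD-style workload prediction idea described above. The parameters (ALPHA, BETA, MIN_WORK) and the deadline-based feedback signal are illustrative assumptions, not details taken from the paper; the actual NIPAFed update rule may differ.

```python
# Hypothetical AIMD-style workload prediction for an FL client device.
# Assumption: workload is measured in local samples per round, and the
# feedback signal is whether the device finished before the round deadline.

ALPHA = 5      # additive increase applied while the device keeps up
BETA = 0.5     # multiplicative decrease applied after a straggler event
MIN_WORK = 10  # lower bound on the assigned workload


def predict_workload(prev_workload: int, completed_on_time: bool) -> int:
    """Return the next-round workload for one device.

    AIMD rule: grow the assignment additively while the device meets the
    deadline, and shrink it multiplicatively as soon as it misses one, so
    slow or unstable devices are not overloaded (fewer dropouts/stragglers).
    """
    if completed_on_time:
        return prev_workload + ALPHA
    return max(MIN_WORK, int(prev_workload * BETA))


# Example: a device that misses one deadline gets its load halved.
load = 100
for on_time in [True, True, False, True]:
    load = predict_workload(load, on_time)
    print(load)  # 105, 110, 55, 60
```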