Person re-identification (ReID) plays a significant role in intelligent surveillance systems. However, it remains challenging due to large intra-class variations, as the same person is captured in different scenes and by different cameras. To address this issue, current person ReID research focuses on learning robust, discriminative features and on generalizing neural networks to diverse target domains. Recently, following the success of vision transformers, transformers have also been applied to person ReID. Transformer-based methods have improved the quantitative performance of person ReID; however, they still struggle to distinguish between classes. Therefore, this paper proposes a novel region-enhanced transformer (REET) to create robust ReID features. Unlike conventional transformer-based approaches, REET emphasizes tokens generated at the region level. Our method achieves state-of-the-art results on three public datasets: Market1501, DukeMTMC, and CUHK-03.
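To make the idea of region-level tokens concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a ViT-style backbone whose patch tokens are average-pooled within a few horizontal stripes ("regions"), with the resulting region tokens appended to the patch sequence so later attention blocks can emphasize them. The class name, the projection layer, and the number of regions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RegionTokenizer(nn.Module):
    """Hypothetical region-level token generator for a ViT-style ReID backbone."""

    def __init__(self, embed_dim: int = 768, num_regions: int = 4):
        super().__init__()
        self.num_regions = num_regions
        # Small projection so region tokens are distinguishable from raw patch tokens.
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, patch_tokens: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
        # patch_tokens: (B, grid_h * grid_w, D) patch embeddings from the backbone.
        b, n, d = patch_tokens.shape
        assert n == grid_h * grid_w, "token count must match the patch grid"
        grid = patch_tokens.view(b, grid_h, grid_w, d)
        # Split the patch grid into horizontal stripes and pool each into one region token.
        stripes = grid.chunk(self.num_regions, dim=1)
        region_tokens = torch.stack([s.mean(dim=(1, 2)) for s in stripes], dim=1)  # (B, R, D)
        region_tokens = self.proj(region_tokens)
        # Append region tokens to the patch sequence for subsequent attention blocks.
        return torch.cat([patch_tokens, region_tokens], dim=1)


# Example: a 16x8 patch grid from a 256x128 pedestrian crop with patch size 16.
tokens = torch.randn(2, 16 * 8, 768)
out = RegionTokenizer()(tokens, grid_h=16, grid_w=8)
print(out.shape)  # torch.Size([2, 132, 768])
```

Horizontal stripes are used here only because they are a common choice for pedestrian images; the paper's actual region definition may differ.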