Intelligent and autonomous learning of patients' activities can drive significant progress toward future smart e-health systems. With recent advances in artificial intelligence, signal processing, and computational capabilities, light detection and ranging (LiDAR) technology can play a significant role in enhancing current patient activity recognition (PAR) systems. In this letter, we propose confidential and accurate monitoring of patient arm behavior using a standalone 3-D LiDAR sensor. Due to the unavailability of LiDAR data, we use a computer-programmed 3-D simulator to generate virtual-LiDAR (VLiDAR) 3-D point cloud data that simulate real patient movements. These virtual data are used to train a multilayer perceptron (MLP) model that segments the points of the patient's body into arm and non-arm classes. We further propose a subsegmentation technique that divides the arm point cloud into upper-arm and lower-arm regions. Finally, we demonstrate arm gesture identification using the proposed scheme. The numerical results show that the proposed MLP model achieves a test accuracy of 90.8% and a cross-validation accuracy of 87.4%.
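
As a rough illustration of the point-wise arm/non-arm segmentation step, the following Python sketch trains a small MLP on 3-D point coordinates. The per-point features, layer sizes, and the synthetic point clusters below are assumptions for illustration only; the letter's actual VLiDAR data, feature set, and network configuration are not reproduced here.

    # Minimal sketch: point-wise arm vs. non-arm segmentation with an MLP.
    # Assumptions (not from the letter): each LiDAR point is described only by
    # its (x, y, z) coordinates, labels are binary (1 = arm, 0 = non-arm), and
    # the synthetic clusters below merely stand in for VLiDAR simulator output.
    import numpy as np
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)

    # Placeholder "torso" and "arm" point clusters (hypothetical geometry).
    torso = rng.normal(loc=[0.0, 0.0, 1.2], scale=[0.15, 0.10, 0.30], size=(4000, 3))
    arms = rng.normal(loc=[0.35, 0.0, 1.3], scale=[0.20, 0.08, 0.15], size=(1000, 3))
    X = np.vstack([torso, arms])
    y = np.concatenate([np.zeros(len(torso)), np.ones(len(arms))])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    # Small fully connected network; layer sizes are illustrative, not the paper's.
    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
    model.fit(X_train, y_train)

    print("test accuracy:", model.score(X_test, y_test))
    print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

A subsegmentation stage could then be applied only to points predicted as arm, for example by a second classifier or a simple geometric split into upper-arm and lower-arm regions, though the letter's specific subsegmentation technique is not shown here.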