Traditional basketball training is highly subjective: coaches manually analyze and train players based on their own curricula and knowledge. With today's advances in computer vision and machine learning, these technologies can now recognize and classify human actions with high accuracy. Researchers have applied motion tracking systems to human motion analysis in fields such as motion capture, sign language translation, gesture control, virtual reality, and even medical treatment. These systems commonly capture data with RGB-D cameras because of the features such cameras offer, especially their ability to capture depth images. Coupled with machine learning, a system that can recognize and classify human actions is more feasible than ever. This study uses a Microsoft Kinect V2 to capture footage of players performing three maneuvers: the jump shot, the free throw, and the lay-up. The data will be collected and pre-processed using C#, the Kinect SDK, and the KinectPV2 library. By tracking the whole body, its parts, and its joints, the model will classify whether each maneuver was performed properly. The proponents will then use Scikit-Learn as the platform to train an Extremely Randomized Trees (ERT) model and a Long Short-Term Memory (LSTM) model, and determine which model is more robust for this kind of application.
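The classification step described above can be sketched in Scikit-Learn. This is a minimal, hypothetical illustration only: it assumes the ERT model refers to Scikit-Learn's Extremely Randomized Trees (`ExtraTreesClassifier`), that each captured clip is summarized as a fixed-length feature vector built from the Kinect V2's 25 tracked joints (25 joints × 3 coordinates = 75 features), and that the three maneuver labels are encoded as integers. The random data stands in for real Kinect captures, which the study has not yet collected here.

```python
# Hypothetical sketch: classifying basketball maneuvers from Kinect V2
# joint features with Extremely Randomized Trees in Scikit-Learn.
# The data below is synthetic placeholder data, not real capture output.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder dataset: 300 clips, each flattened to 25 joints x 3 coords.
# Labels: 0 = jump shot, 1 = free throw, 2 = lay-up (assumed encoding).
X = rng.normal(size=(300, 75))
y = rng.integers(0, 3, size=300)

# Hold out 20% of clips to estimate generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Extremely Randomized Trees: an ensemble of trees whose split
# thresholds are drawn at random, which reduces variance.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

On real data the feature vectors would come from the pre-processed Kinect skeleton streams rather than random noise, and the same train/test split would let the ERT model be compared against the LSTM model on identical held-out clips.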