Multi-target multi-camera tracking (MTMCT) aims to automatically generate trajectories of objects that appear across multiple cameras. MTMCT can be treated as a combination of intra-camera tracking and cross-camera tracking. Existing work employs only a global description to generate tracklets. However, a global description cannot model the local similarity between targets, so existing methods are not robust to occlusion and fast motion. To address this problem, we propose an online Optical-based Pose Association (OPA) method for multi-target multi-camera tracking. The proposed method utilizes local pose matching to handle occlusion and applies optical flow to reduce the displacement caused by fast motion. For optical-based pose association, we first employ OpenPose to estimate a human pose for each proposal. Then, we utilize the optical flow generated by PWC-Net to adjust the poses estimated in the previous frame. Finally, a modified Object Keypoint Similarity is used to compute the similarity between the poses of the current frame and the adjusted poses of the previous frame. After obtaining the optical-based pose similarity, we combine it with the visual and bounding-box spatial similarities to generate the final similarity matrix, and apply the Kuhn-Munkres algorithm for data association. Experiments on MTMCT and MOT datasets verify the rationality of using human pose information and demonstrate the superiority of the proposed method.
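The pipeline described above can be illustrated with a minimal sketch. All function names here are hypothetical, the standard (unmodified) Object Keypoint Similarity is used in place of the paper's modified variant, a single falloff constant replaces the per-joint constants of the COCO metric, and SciPy's `linear_sum_assignment` stands in as the Kuhn-Munkres implementation; this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Assumed single per-keypoint falloff constant; COCO uses per-joint values.
KAPPA = 0.1

def warp_pose(pose, flow):
    """Shift each previous-frame keypoint by the optical flow at its
    location, approximating where the joint lands in the current frame.

    pose: (K, 2) array of (x, y) keypoints; flow: (H, W, 2) flow field.
    """
    h, w = flow.shape[:2]
    x = np.clip(pose[:, 0].round().astype(int), 0, w - 1)
    y = np.clip(pose[:, 1].round().astype(int), 0, h - 1)
    return pose + flow[y, x]

def oks(pose_a, pose_b, scale, vis):
    """Plain Object Keypoint Similarity between two poses.

    scale: object scale (e.g. sqrt of bounding-box area);
    vis: (K,) boolean mask of keypoints counted in the similarity.
    """
    d2 = np.sum((pose_a - pose_b) ** 2, axis=1)
    e = np.exp(-d2 / (2.0 * scale ** 2 * KAPPA ** 2))
    return float(e[vis].sum() / max(int(vis.sum()), 1))

def associate(sim):
    """Kuhn-Munkres assignment maximizing total similarity."""
    rows, cols = linear_sum_assignment(-sim)  # negate to maximize
    return list(zip(rows.tolist(), cols.tolist()))
```

In a full tracker, the OKS matrix produced this way would be fused with visual and bounding-box spatial similarity matrices before `associate` is called on the combined result.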