A growing number of researchers are using multiple dissimilarity metrics or image features for medical image registration. In most of these approaches, however, the weights that rank the relative importance of the selected metrics are empirically tuned and fixed over the entire image domain. Different parts of a medical image may exhibit markedly different appearance properties, so a given metric may be informative in some image regions but much less so in others. In this paper, we propose to adapt this weighting spatially, producing a locally-adaptive set of dissimilarity metrics whose combination encourages proper spatial alignment. Using contextual information or a learning procedure, our approach generates a vector weight map that determines, at each spatial location, the relative importance of each constituent of the overall metric. Our approach was evaluated on two datasets: 15 computed tomography (CT) lung images and 40 magnetic resonance (MR) brain images. Experiments show that using a locally-adaptive set of dissimilarity metrics gives superior results compared with its non-region-specific variant.
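As a rough illustration of the underlying idea (not the paper's actual formulation), the sketch below combines two per-voxel dissimilarity maps with a spatially varying weight map, so that each image location can emphasize a different constituent metric. The constituent metrics, weight values, and function names are illustrative assumptions only.

```python
import numpy as np

def combined_dissimilarity(fixed, moving, metrics, weight_map):
    """Sum of per-voxel dissimilarity maps weighted by a spatial weight map.

    fixed, moving : arrays of the same shape (images being registered)
    metrics       : list of callables, each returning a per-voxel dissimilarity
                    map with the same shape as the inputs
    weight_map    : array of shape (len(metrics), *fixed.shape); assumed
                    non-negative and summing to 1 at each voxel
    """
    total = np.zeros_like(fixed, dtype=float)
    for k, metric in enumerate(metrics):
        total += weight_map[k] * metric(fixed, moving)
    return total.sum()  # scalar objective for the registration optimizer

# Two hypothetical constituent metrics: squared intensity difference and
# squared gradient-magnitude difference.
def ssd_map(f, m):
    return (f - m) ** 2

def grad_map(f, m):
    gf = np.linalg.norm(np.stack(np.gradient(f)), axis=0)
    gm = np.linalg.norm(np.stack(np.gradient(m)), axis=0)
    return (gf - gm) ** 2

fixed = np.random.rand(64, 64)
moving = np.random.rand(64, 64)

# Toy weight map: favour the intensity term in the left half of the image
# and the gradient term in the right half.
w = np.zeros((2, 64, 64))
w[0, :, :32], w[1, :, :32] = 0.8, 0.2
w[0, :, 32:], w[1, :, 32:] = 0.2, 0.8

value = combined_dissimilarity(fixed, moving, [ssd_map, grad_map], w)
```

In practice, the weight map would come from contextual information or a learning procedure rather than being hand-set as in this toy example; the fixed-weight baseline corresponds to a weight map that is constant across the image domain.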