Guided super-resolution (GSR) of thermal images using visible-range images is challenging because of the difference in spectral range between the two modalities. This leads to significant texture mismatch between the images, which manifests as blur and ghosting artifacts in the super-resolved thermal image. To tackle this, we propose a novel algorithm for GSR based on pyramidal edge-maps extracted from the visible image. Our proposed network consists of two sub-networks. The first sub-network super-resolves the low-resolution thermal image, while the second obtains edge-maps from the visible image at growing perceptual scales and integrates them into the super-resolution sub-network via attention-based fusion. Extracting and integrating multi-level edges allows the super-resolution network to process texture-to-object-level information progressively, enabling more straightforward identification of edges shared by the input images. Extensive experiments show that our model outperforms state-of-the-art GSR methods, both quantitatively and qualitatively.
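The two-branch design with attention-based fusion can be illustrated with a minimal PyTorch sketch. This is not the paper's exact architecture: all module names, channel widths, the 4x scale factor, and the residual formulation are illustrative assumptions, and the edge branch below derives multi-level features from a single pre-computed edge map with stacked convolutions rather than from the pyramidal edge-maps used in the paper.

```python
# Minimal sketch of a two-branch guided SR network with attention-based fusion.
# All names, widths, and the scale factor are assumptions, not the authors' design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusion(nn.Module):
    """Fuse thermal SR features with visible-edge features via a learned attention mask."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.Sigmoid(),  # per-pixel weights deciding which edge features to admit
        )
        self.merge = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, thermal_feat, edge_feat):
        attn = self.mask(torch.cat([thermal_feat, edge_feat], dim=1))
        return self.merge(torch.cat([thermal_feat, attn * edge_feat], dim=1))


class EdgeBranch(nn.Module):
    """Produce edge features at several perceptual levels from a visible edge map."""
    def __init__(self, channels, levels=3):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)  # input: single-channel edge map
        self.levels = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
            for _ in range(levels)
        )

    def forward(self, visible_edges):
        feat = self.stem(visible_edges)
        outs = []
        for block in self.levels:
            feat = block(feat)  # deeper blocks capture coarser, object-level structure
            outs.append(feat)
        return outs


class GuidedThermalSR(nn.Module):
    """Thermal SR branch that progressively injects multi-level edge features."""
    def __init__(self, channels=64, levels=3, scale=4):
        super().__init__()
        self.scale = scale
        self.edge_branch = EdgeBranch(channels, levels)
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            )
            for _ in range(levels)
        )
        self.fusions = nn.ModuleList(AttentionFusion(channels) for _ in range(levels))
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr_thermal, visible_edges):
        # Upsample the LR thermal input to the guide's resolution first.
        x = F.interpolate(lr_thermal, scale_factor=self.scale,
                          mode="bicubic", align_corners=False)
        feat = self.head(x)
        edge_feats = self.edge_branch(visible_edges)
        for stage, fusion, edge_feat in zip(self.stages, self.fusions, edge_feats):
            feat = fusion(stage(feat), edge_feat)  # attention-based fusion per level
        return x + self.tail(feat)  # residual prediction over the bicubic upsample


if __name__ == "__main__":
    lr_thermal = torch.randn(1, 1, 32, 32)       # low-resolution thermal image
    visible_edges = torch.randn(1, 1, 128, 128)  # edge map from the HR visible image
    sr = GuidedThermalSR()(lr_thermal, visible_edges)
    print(sr.shape)  # torch.Size([1, 1, 128, 128])
```

The key design point mirrored here is that each fusion stage gates the edge features with a learned mask before merging, so that only visible-image edges consistent with the thermal content are passed through, which is what suppresses texture mismatch artifacts.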