Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning

CJ Reed, R Gupta, S Li, S Brockman… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large, pretrained models are commonly finetuned with imagery that is heavily augmented to
mimic different conditions and scales, with the resulting models used for various tasks with …

Creating xBD: A dataset for assessing building damage from satellite imagery

R Gupta, B Goodman, N Patel… - Proceedings of the …, 2019 - openaccess.thecvf.com
We present a preliminary report for xBD, a new large-scale dataset for the advancement of
change detection and building damage assessment for humanitarian assistance and …

Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP

L Niss, K Vogt-Lowell, T Tsiligkaridis - arXiv preprint arXiv:2402.09613, 2024 - arxiv.org
Foundation models are presented as generalists that often perform well over a myriad of
tasks. Fine-tuning these models, even on limited data, provides an additional boost in task …

[PDF] Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning

R Gupta, S Li, S Brockman, C Funk, B Clipp, K Keutzer… - AGU23, 2023 - eecs.berkeley.edu
Large, pretrained models are commonly finetuned with imagery that is heavily augmented to
mimic different conditions and scales, with the resulting models used for various tasks with …