methods work best when their training and test data are drawn from the same distribution.
For many NLP tasks, however, we are confronted with new domains in which labeled data is
scarce or non-existent. In such cases, we seek to adapt existing models from a resource-rich
source domain to a resource-poor target domain. We introduce structural correspondence
learning to automatically induce correspondences among features from different domains …
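For concreteness, the sketch below illustrates the core idea of structural correspondence learning under simplifying assumptions: pivot features are chosen by raw frequency across the two domains, each pivot is predicted from the remaining features by closed-form ridge regression, and an SVD of the stacked predictor weights yields a low-dimensional projection shared by both domains. The function name `scl_projection` and all hyperparameter choices here are illustrative assumptions, not the paper's exact specification.

```python
# A minimal sketch of structural correspondence learning (SCL) in NumPy.
# Pivot selection by frequency, ridge regression, and the hyperparameter
# defaults below are illustrative assumptions, not the paper's choices.
import numpy as np

def scl_projection(X_src, X_tgt, n_pivots=10, k=5, ridge=1.0):
    """Learn a shared feature projection from unlabeled source/target data.

    X_src, X_tgt : (n_examples, n_features) binary feature matrices.
    Returns (theta, rest): theta has shape (k, n_features - n_pivots) and
    projects the non-pivot features (indexed by `rest`) into a shared space.
    """
    X = np.vstack([X_src, X_tgt]).astype(float)

    # 1. Pick pivot features: here, simply the features occurring most
    #    often across both domains (a stand-in for the paper's criteria).
    counts = X.sum(axis=0)
    pivots = np.argsort(-counts)[:n_pivots]
    rest = np.setdiff1d(np.arange(X.shape[1]), pivots)

    # 2. For each pivot, fit a linear predictor of the pivot's presence
    #    from the non-pivot features (ridge regression, closed form).
    X_rest = X[:, rest]
    A = X_rest.T @ X_rest + ridge * np.eye(len(rest))
    W = np.linalg.solve(A, X_rest.T @ X[:, pivots])  # (n_rest, n_pivots)

    # 3. SVD of the stacked pivot-predictor weights: the top-k left
    #    singular vectors span the induced correspondence space.
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    theta = U[:, :k].T                               # (k, n_rest)
    return theta, rest
```

In use, each example's original features would be augmented with its projection `theta @ x[rest]` before training the supervised model on source-domain labels, so that features which behave alike across domains map to nearby points in the shared space.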