Current neural network based community question answering (cQA) systems fall short of (1) properly handling long answers, which are common in cQA; (2) performing well under small-data conditions, where a large amount of training data is unavailable, i.e., for some domains in English and even more so for a large number of datasets in other languages; and (3) benefiting from syntactic information in the model, e.g., to differentiate between identical lexemes with different syntactic roles. In this paper, we propose COALA, an answer selection approach that (a) selects appropriate long answers by effectively comparing all question-answer aspects, (b) generalizes from a small number of training examples, and (c) makes use of information about the syntactic roles of words. We show that our approach outperforms existing answer selection models by a large margin on six cQA datasets from different domains. Furthermore, we report the best results on the passage retrieval benchmark WikiPassageQA.