Towards Improving Causality Mining using BERT with Multi-level Feature Networks

W Ali, W Zuo, R Ali, G Rahman, X Zuo, I Ullah - KSII Transactions on Internet & Information Systems, 2022 - itiis.org
Abstract
Causality mining in NLP is a significant area of interest that benefits many daily-life applications, including decision making, business risk management, question answering, future event prediction, scenario generation, and information retrieval. Mining such causalities has been a challenging and open problem for prior non-statistical and statistical techniques operating on web sources, which required hand-crafted linguistic patterns for feature engineering, were subject to domain knowledge, and demanded considerable human effort. Those studies focused on explicit causality mining and overlooked implicit, ambiguous, and heterogeneous causality. In contrast to statistical and non-statistical approaches, we present Bidirectional Encoder Representations from Transformers (BERT) integrated with a Multi-level Feature Network (MFN), called BERT+MFN, for causality recognition in noisy and informal web datasets without human-designed features. In our model, the MFN consists of a three-column knowledge-oriented network (TC-KN), a bi-LSTM, and a Relation Network (RN) that mine causality information at the segment level, while BERT captures semantic features at the word level. We perform experiments on Alternative Lexicalization (AltLexes) datasets. The experimental outcomes show that our model outperforms baseline causality and text mining techniques.
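To make the described architecture concrete, the sketch below combines BERT-style word-level token features with a segment-level bi-LSTM branch and a Relation-Network-style pairwise module, fused for binary causality classification. This is a minimal illustration, not the authors' code: the layer sizes, the fusion scheme, and the stand-in for the TC-KN branch (a simple projection of a summary token) are assumptions for exposition only.

```python
# Minimal sketch (assumed layout, not the paper's exact BERT+MFN architecture):
# word-level token features (e.g. BERT's last hidden states) are processed by a
# segment-level bi-LSTM, scored pairwise by a Relation-Network-style MLP, and
# fused with a projected summary token for binary causal classification.
import torch
import torch.nn as nn

class CausalityClassifier(nn.Module):
    def __init__(self, hidden=768, lstm_hidden=256, rn_hidden=128, num_classes=2):
        super().__init__()
        # Segment-level branch: bi-LSTM over the word-level token features.
        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                              bidirectional=True)
        # Relation-Network-style module: scores ordered pairs of segment states.
        self.rn = nn.Sequential(
            nn.Linear(4 * lstm_hidden, rn_hidden), nn.ReLU(),
            nn.Linear(rn_hidden, rn_hidden), nn.ReLU(),
        )
        # Stand-in for the knowledge-oriented (TC-KN) branch: a projection of
        # the word-level summary ([CLS]-like) vector.
        self.word_proj = nn.Linear(hidden, rn_hidden)
        self.classifier = nn.Linear(2 * rn_hidden, num_classes)

    def forward(self, token_feats):
        # token_feats: (batch, seq_len, hidden), e.g. BERT's last hidden states.
        seg, _ = self.bilstm(token_feats)            # (B, T, 2*lstm_hidden)
        B, T, H = seg.shape
        left = seg.unsqueeze(2).expand(B, T, T, H)
        right = seg.unsqueeze(1).expand(B, T, T, H)
        pairs = torch.cat([left, right], dim=-1)     # (B, T, T, 2H)
        rel = self.rn(pairs).mean(dim=(1, 2))        # pooled relation features
        word = self.word_proj(token_feats[:, 0])     # word-level summary
        return self.classifier(torch.cat([rel, word], dim=-1))

# Usage with random features in place of real BERT outputs:
if __name__ == "__main__":
    model = CausalityClassifier()
    feats = torch.randn(2, 16, 768)   # (batch, tokens, hidden)
    print(model(feats).shape)         # torch.Size([2, 2])
```

In practice the token features would come from a pretrained BERT encoder (e.g. via the Hugging Face transformers library); random tensors are used here only to keep the sketch self-contained.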