This paper proposes an intelligent text classification framework for Bengali, a resource-constrained language; the task is considered challenging due to the lack of standard …
Abstract: Recently, word embeddings have been introduced as a major breakthrough in Natural Language Processing (NLP) to learn viable representations of linguistic items based …
In recent years, the amount of digital text content in the Bengali language has increased enormously on online platforms due to effortless access to the Internet via …
The literature has not fully or adequately explained why contextual (e.g., BERT-based) representations are so successful at improving the effectiveness of some Natural Language …
C Sun, X Qiu, Y Xu, X Huang - … : 18th China national conference, CCL 2019 …, 2019 - Springer
Abstract: Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art pre-trained language model, BERT …
Due to the expansion of data generation, more and more natural language processing (NLP) tasks need to be solved. For these tasks, word representation plays a vital role …
Word embeddings, which represent words as numerical vectors in a high-dimensional space, are contextualized by generating a unique vector representation for each sense of a …
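The contextualization described in this snippet can be illustrated with a deliberately simplified sketch. The vectors and the neighbour-averaging rule below are invented for illustration only; a real contextual model such as BERT computes sense-dependent vectors with deep attention layers rather than simple averaging:

```python
# Toy illustration (not a real model): static vs. contextual word vectors.
# All vectors and the averaging scheme are invented for demonstration.

STATIC = {
    "bank":  [0.5, 0.5],   # a static embedding: one vector regardless of sense
    "river": [0.0, 1.0],
    "money": [1.0, 0.0],
}

def contextual_vector(word, sentence):
    """Return a context-dependent vector: the word's static vector
    averaged with the static vectors of its in-vocabulary neighbours.
    This crudely mimics how contextual models give the same word a
    different vector in each context."""
    vecs = [STATIC[word]]
    vecs += [STATIC[w] for w in sentence if w != word and w in STATIC]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# The same word "bank" receives a different vector per context:
v_river = contextual_vector("bank", ["river", "bank"])  # [0.25, 0.75]
v_money = contextual_vector("bank", ["money", "bank"])  # [0.75, 0.25]
print(v_river, v_money)
```

The point of the sketch is the contrast: `STATIC["bank"]` is fixed once, while `contextual_vector` yields a distinct representation for each occurrence, which is what lets contextual embeddings separate word senses.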
C Wang, P Nulty, D Lillis - … of the 4th international conference on natural …, 2020 - dl.acm.org
Word embeddings act as an important component of deep models for providing input features in downstream language tasks, such as sequence labelling and text classification …
This article studies convolutional neural networks for Tigrinya (also referred to as Tigrigna), which is a Semitic language spoken in Eritrea and northern Ethiopia. Tigrinya is a …