General Words Representation Method for Modern Language Model

Authors

  • Abbas Saliimi Lokman, Faculty of Computing, Universiti Malaysia Pahang, Malaysia.
  • Mohamed Ariff Ameedeen, Faculty of Computing, Universiti Malaysia Pahang, Malaysia.
  • Ngahzaifa Ab. Ghani, Faculty of Computing, Universiti Malaysia Pahang, Malaysia.

DOI:

https://doi.org/10.54554/jtec.2023.15.01.001

Keywords:

Word representation, Word embedding, Language model, QA System

Abstract

This paper proposes a new word representation method that emphasizes general words over specific words. The main motivation for developing this method is to address the weighting bias in modern Language Models (LMs). Built on the Transformer architecture, contemporary LMs tend to naturally emphasize specific words through the Attention mechanism in order to capture the key semantic concepts in a given text. As a result, general words, including question words, are often neglected by LMs, leading to a biased representation of word significance (specific words receive heightened weights, while general words receive reduced weights). This paper presents a case study in which the semantics of general words are as important as those of specific words, specifically in the abstractive answer area within the Natural Language Processing (NLP) Question Answering (QA) domain. Based on the selected case study datasets, two experiments are designed to test the hypothesis that "the significance of general words is highly correlated with their Term Frequency (TF) percentage across various document scales". The results of these experiments support the hypothesis, justifying the method's intention to emphasize general words over specific words regardless of corpus size. The output of the proposed method is a list of token (word)-weight pairs. The generated weights can be used to raise the significance of general words over specific words in suitable NLP tasks. One such task is question classification (classifying whether a question text expects a factual or an abstractive answer). In this context, general words, particularly question words, are more semantically significant than specific words, because the same specific words in different questions may require different answers depending on their question words (e.g., "How many items are on sale?" versus "What items are on sale?").
By employing the general-word weight values produced by this method, the weight of question words can be heightened relative to specific words, making it easier for a classification system to differentiate between such questions. Additionally, the token (word)-weight pair list is made available online at https://www.kaggle.com/datasets/saliimiabbas/genwords-weight.
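The abstract describes the method's output (token-weight pairs whose weights track a word's term-frequency percentage across a corpus) without giving the exact formula. The following is a minimal illustrative sketch, assuming the weight is simply a token's TF percentage over the whole corpus; the function name, tokenization, and toy documents are hypothetical, not the authors' implementation, whose published weight list is at the Kaggle link above.

```python
from collections import Counter

def general_word_weights(documents):
    """Weight each token by its term-frequency percentage across the
    corpus, so frequent 'general' words (question words, function
    words) receive higher weights than rare 'specific' words.
    Assumed formula: weight(t) = count(t) / total token count."""
    counts = Counter(tok.lower() for doc in documents for tok in doc.split())
    total = sum(counts.values())
    return {tok: cnt / total for tok, cnt in counts.items()}

# Toy corpus echoing the abstract's question-classification example:
docs = ["How many items are on sale ?", "What items are on sale ?"]
weights = general_word_weights(docs)
# Question words ("how", "what") get weights comparable to content
# words, so a classifier can use them to tell the questions apart.
```

Under this assumption the weights for the two questions differ only in their question-word entries, which is exactly the signal the abstract says a factual-vs-abstractive classifier needs.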

Published

2023-03-29

How to Cite

Lokman, A. S., Ameedeen, M. A., & Ab. Ghani, N. (2023). General Words Representation Method for Modern Language Model. Journal of Telecommunication, Electronic and Computer Engineering (JTEC), 15(1), 1–5. https://doi.org/10.54554/jtec.2023.15.01.001