General Words Representation Method for Modern Language Models
Keywords: Word representation, Word embedding, Language model, QA System
This paper proposes a new word representation method that emphasizes general words over specific words. The main motivation for developing this method is to address the weighting bias in modern Language Models (LMs). Built on the Transformer architecture, contemporary LMs naturally emphasize specific words through the Attention mechanism in order to capture the key semantic concepts in a given text. As a result, general words, including question words, are often neglected by LMs, leading to a biased representation of word significance (specific words receive heightened weights, while general words receive reduced weights). This paper presents a case study in which the semantics of general words are as important as those of specific words, specifically in the abstractive-answer area of the Natural Language Processing (NLP) Question Answering (QA) domain. Based on the selected case-study datasets, two experiments are designed to test the hypothesis that "the significance of general words is highly correlated with their Term Frequency (TF) percentage across various document scales". The results of these experiments support this hypothesis, justifying the method's intention to emphasize general words over specific words at any corpus size. The output of the proposed method is a list of token (word)-weight pairs. These generated weights can be used to raise the significance of general words over specific words in suitable NLP tasks. An example of such a task is question classification (classifying whether a question expects a factual or an abstractive answer). In this context, general words, particularly question words, are more semantically significant than specific words, because the same specific words in different questions may require different answers depending on their question words (e.g. "How many items are on sale?" versus "What items are on sale?").
By employing the general-word weight values produced by this method, the weight of question words can be heightened relative to specific words, making it easier for the classification system to differentiate between such questions. Additionally, the token (word)-weight pair list is available online at https://www.kaggle.com/datasets/saliimiabbas/genwords-weight.
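As an illustrative sketch of the two ideas above — measuring a token's Term Frequency (TF) percentage at different document scales, and re-weighting question tokens with a token-weight list — the snippet below uses a toy corpus and hypothetical weight values; the function names and weights are assumptions for illustration, not the paper's published method or its Kaggle weight list.

```python
from collections import Counter

def tf_percentage(tokens):
    """Term frequency of each token as a percentage of all tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {tok: 100.0 * c / total for tok, c in counts.items()}

# Illustrative "corpora" at two scales (a single question vs. several).
small = "what items are on sale".split()
large = ("what items are on sale how many items are on sale "
         "what is the price what time does the sale end").split()

# General/question words such as "what" retain a sizeable TF share at
# both scales, which is the kind of relationship the hypothesis tests.
tf_small = tf_percentage(small)   # "what" -> 20.0
tf_large = tf_percentage(large)

# A token-weight list (as produced by the proposed method) can then
# re-weight tokens before classification; these weights are hypothetical.
weights = {"what": 2.0, "how": 2.0, "many": 1.5}

def weighted_tokens(tokens, weight_map, default=1.0):
    """Pair each token with its weight, defaulting for unlisted tokens."""
    return [(t, weight_map.get(t, default)) for t in tokens]
```

With such a weighting, "How many items are on sale?" and "What items are on sale?" differ not only in their question words but also in the weight mass assigned to them, which is what helps a classifier separate factual from abstractive questions.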
Ministry of Higher Education, Malaysia
Grant number UMP.05/26.10/03/RDU220315