How to pretrain an efficient cross-disciplinary language model: The ScilitBERT use case
Date
2021-12
Publisher
Faculty of Information Technology, University of Moratuwa.
Abstract
Transformer-based models are widely used in text processing tasks such as classification and named entity recognition. Representing scientific text is a complicated task, and general English BERT models are suboptimal for it. We observe a lack of models for representing multidisciplinary academic texts and, more broadly, a lack of specialized models pretrained on specific domains for which general English BERT models are suboptimal. This paper introduces ScilitBERT, a BERT model pretrained on an inclusive, cross-disciplinary academic corpus. ScilitBERT is half as deep as RoBERTa and has a much lower pretraining computation cost. ScilitBERT obtains at least 96% of RoBERTa's accuracy on two academic-domain downstream tasks. The presented cross-disciplinary academic model has been publicly released (https://github.com/JeanBaptiste-dlb/ScilitBERT). The results show that for domains that use a technolect and have a sizeable amount of raw text data, the pretraining of dedicated models should be considered and favored.
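The abstract describes fine-tuning a released BERT-style checkpoint on academic downstream tasks such as classification. The sketch below is a minimal, hypothetical example of how such a checkpoint could be loaded and run with the Hugging Face Transformers library; the checkpoint path and the number of labels are placeholders, not identifiers confirmed by the paper or the repository.

```python
# Hypothetical usage sketch: loading a BERT-style checkpoint (e.g. ScilitBERT)
# with Hugging Face Transformers for a classification downstream task.
# "path/to/scilitbert" is a placeholder for a local directory containing the
# released weights (see the linked GitHub repository), not a confirmed Hub ID.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "path/to/scilitbert"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,  # placeholder: depends on the downstream task
)

# Encode an academic-domain sentence and run a forward pass.
inputs = tokenizer(
    "Transformer-based language models capture domain-specific terminology.",
    return_tensors="pt",
    truncation=True,
)
outputs = model(**inputs)
print(outputs.logits)
```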
Keywords
Language models, Clustering, Classification, Association rules, Benchmarking, Text analysis
Citation
J.-B. de la Broise, N. Bernard, J.-P. Dubuc, A. Perlato and B. Latard, "How to pretrain an efficient cross-disciplinary language model: The ScilitBERT use case," 2021 6th International Conference on Information Technology Research (ICITR), 2021, pp. 1-6, doi: 10.1109/ICITR54349.2021.9657164.