Abstract
Clickbait headlines have become pervasive on social media and news websites. Methods to identify clickbait have largely been developed for English; with the growing use of social media platforms in other languages, similar resources are needed for those languages as well. In this work, we present an annotated clickbait dataset of 112,657 headlines that can be used to build an automated clickbait detection system for Telugu, a resource-poor language. Our contributions include (i) the latest pre-trained language models, including RoBERTa, ALBERT, and ELECTRA, trained on a large Telugu corpus of 8,015,588 sentences that we collected, and (ii) data analysis and benchmarking of approaches ranging from hand-crafted features to state-of-the-art models. We show that the language models pre-trained on Telugu outperform the existing multilingual pre-trained models, viz. BERT-Multilingual-Cased [1], XLM-MLM [2], and XLM-R [3], on the clickbait detection task. On the full Telugu clickbait dataset of 112,657 samples, the Light Gradient Boosted Machines (LGBM) model achieves an F1-score of 0.94 for clickbait headlines and a comparable F1-score of 0.93 for non-clickbait headlines. We open-source our dataset, pre-trained models, and code.
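As a rough illustration of the benchmarking setup summarized above, the sketch below trains a LightGBM classifier on TF-IDF character n-gram features of headlines and reports per-class F1. The file name, column names, and feature choices are assumptions for illustration only, not the authors' exact pipeline.

```python
# Minimal sketch (not the authors' exact pipeline): LGBM on TF-IDF features
# of headlines, with per-class F1 as reported in the abstract.
# Assumes a CSV with columns "headline" and "label" (1 = clickbait, 0 = non-clickbait);
# the file name and feature configuration are hypothetical.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

df = pd.read_csv("telugu_clickbait.csv")  # hypothetical file name

X_train, X_test, y_train, y_test = train_test_split(
    df["headline"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# Character n-grams are a reasonable baseline feature for Telugu script.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), max_features=50000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

clf = LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_train_vec, y_train)

preds = clf.predict(X_test_vec)
print("Clickbait F1:", f1_score(y_test, preds, pos_label=1))
print("Non-clickbait F1:", f1_score(y_test, preds, pos_label=0))
```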