The 10th International Congress on Information and Communication Technology, held concurrently with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom, February 18-21, 2025.
Authors - Amr Abu Alhaj, Omar Safwat, Youssef Ghoneim, Imran Zualkernan, Ali Reza Sajun

Abstract - This paper examines the use of pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT) and A Robustly Optimized BERT Pretraining Approach (RoBERTa) to build reliable models for detecting fake news in media articles. Traditional Machine Learning (ML) methods often struggle to capture the nuances of misinformation because of their heavy reliance on manual feature engineering. Recent advances in Large Language Models (LLMs) such as BERT and RoBERTa have fundamentally transformed misinformation detection by providing deep contextual representations of text. The research uses the LIAR dataset, which contains 12.8k manually labeled statements from PolitiFact.com along with associated metadata and speaker credit scores. The approach combines BERT/RoBERTa embeddings with complementary architectures for binary classification, introducing a credit-score calculation that reflects each speaker's historical truthfulness. Notably, the BERT-BiLSTM-CNN-FC and RoBERTa-BiLSTM-CNN-FC configurations achieved state-of-the-art F1-scores of 0.76 and 0.74, respectively.
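To make the hybrid architecture concrete, the following is a minimal PyTorch sketch of a BERT-BiLSTM-CNN-FC classifier of the kind the abstract describes, assuming the Hugging Face transformers library. The layer sizes, kernel width, and the point at which speaker credit features are fused are illustrative assumptions, not the authors' published hyperparameters; the abstract does not specify the credit-score calculation itself, so it is passed in here as a precomputed feature vector.

    # Illustrative sketch only; hyperparameters and fusion strategy are assumptions.
    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class BertBiLstmCnnFc(nn.Module):
        def __init__(self, encoder_name="bert-base-uncased",
                     lstm_hidden=128, cnn_channels=64, kernel_size=3,
                     num_credit_features=6):
            super().__init__()
            # BERT or RoBERTa encoder (swap encoder_name for "roberta-base").
            self.encoder = AutoModel.from_pretrained(encoder_name)
            hidden = self.encoder.config.hidden_size  # 768 for base models
            # BiLSTM over the contextual token embeddings.
            self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                                  bidirectional=True)
            # 1-D convolution over BiLSTM outputs to capture local n-gram cues.
            self.conv = nn.Conv1d(2 * lstm_hidden, cnn_channels,
                                  kernel_size=kernel_size, padding=1)
            # Fully connected head; speaker credit features are concatenated
            # with the pooled text representation before classification.
            self.fc = nn.Linear(cnn_channels + num_credit_features, 2)

        def forward(self, input_ids, attention_mask, credit_features):
            # (batch, seq_len, hidden) contextual embeddings
            tokens = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            lstm_out, _ = self.bilstm(tokens)          # (batch, seq, 2*lstm_hidden)
            conv_in = lstm_out.transpose(1, 2)         # (batch, 2*lstm_hidden, seq)
            conv_out = torch.relu(self.conv(conv_in))  # (batch, cnn_channels, seq)
            pooled = conv_out.max(dim=2).values        # global max-pool over time
            fused = torch.cat([pooled, credit_features], dim=1)
            return self.fc(fused)                      # logits for binary labels

The num_credit_features default of 6 reflects the six truthfulness-history count fields that accompany each speaker in the LIAR metadata; how those counts are collapsed into a single credit score is a design choice left to the paper itself.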