The 10th International Congress on Information and Communication Technology, held concurrently with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom, February 18-21, 2025.
Authors - Franciskus Antonius Alijoyo, N Venkatramana, Omaia AlOmari, Shamim Ahmad Khan, B Kiran Bala Abstract - The Internet of Things (IoT) is becoming a crucial component of many industries in today's networked world, from smart cities to healthcare. IoT devices are increasingly susceptible to security risks, especially zero-day (0day) attacks, which exploit undiscovered flaws. The dynamic and distributed nature of IoT systems makes these attacks difficult to identify and mitigate. This research presents a deep learning model, implemented in Python, designed specifically for high-accuracy detection. The proposed Autoencoder (AE) with Attention Mechanism model demonstrates exceptional performance in detecting zero-day attacks, achieving an accuracy of 99.45%, precision of 98.56%, recall of 98.53%, and an F1 score of 98.21%. The attention mechanism focuses the model on the most relevant features, improving efficiency and reducing computational overhead and making it a promising solution for real-time security applications in IoT systems. The proposed model significantly outperforms previous methods such as STL+SVM and AE+DNN. These results highlight its superior ability to identify anomalies with minimal false positives, and its resilience makes it highly effective at detecting zero-day attacks. The results demonstrate how deep learning may improve the security posture of IoT systems by offering proactive, real-time protection against zero-day threats, resulting in safer and more robust IoT environments.
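The core idea of the abstract above, attention-weighted reconstruction error from an autoencoder as an anomaly score, can be sketched in a few lines. This is a minimal illustration with toy data and untrained random weights, not the authors' model: the network shapes, the +6 feature shift used to inject anomalies, and the 95th-percentile threshold are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy traffic features; shift the first 5 rows to simulate zero-day anomalies.
X = rng.normal(size=(100, 8))
X_anom = X.copy()
X_anom[:5] += 6.0

# Stand-in for a trained linear autoencoder (random, illustrative weights).
W_enc = rng.normal(size=(8, 3)) * 0.5
W_dec = rng.normal(size=(3, 8)) * 0.5

def attention_weights(errors):
    """Softmax over per-feature errors: focus on the most deviant features."""
    e = errors - errors.max(axis=1, keepdims=True)
    w = np.exp(e)
    return w / w.sum(axis=1, keepdims=True)

def anomaly_scores(X):
    recon = (X @ W_enc) @ W_dec
    per_feature = (X - recon) ** 2
    att = attention_weights(per_feature)
    return (att * per_feature).sum(axis=1)   # attention-weighted error

# Threshold on scores of clean data; flag rows that exceed it.
threshold = np.percentile(anomaly_scores(X), 95)
flags = anomaly_scores(X_anom) > threshold
```

The attention softmax concentrates the score on the features the autoencoder reconstructs worst, which is what lets the detector separate anomalous rows from benign noise.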
Authors - Freedom Khubisa, Oludayo Olugbara Abstract - This paper presents the development and evaluation of an artificial intelligence (AI)-driven web application for detecting maize diseases. The AI application was designed according to the design science methodology to offer accurate and real-time detection of maize diseases through a user-friendly interface. The application used the Flask framework and the Python programming language, leveraging multiple libraries and Application Programming Interfaces (APIs) to handle aspects such as the database, real-time communication, AI models, weather forecast data, and language translation. The application's AI model is a stacked ensemble of cutting-edge deep learning architectures. Technical performance testing was performed using GTmetrix metrics, and the results were remarkable. The WebQual4.0 framework was used to evaluate the application's usability, information quality and service interaction quality. The Cronbach's alpha (α) reliability measure was applied to assess internal consistency for WebQual4.0, which yielded an acceptable reliability score of 0.6809. The usability analysis showed that users perceived the AI-driven web application as intuitive, with high scores computed for navigation and ease of use. The quality of information was rated positively, with users appreciating the reliability and relevance of the maize disease detection results of the AI application. The service interaction quality indicated potential for enhancement, a concern also highlighted in qualitative user feedback that will be considered for future improvement. The study findings generally indicated that our AI application has great potential to improve agricultural practices by providing early maize disease diagnostics and decisive information to aid maize farmers and enhance maize yields.
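The ensemble idea above can be illustrated in its simplest form: combining the class-probability outputs of several base classifiers. The sketch below averages hypothetical softmax outputs of three base models over three disease classes (a full stacked ensemble, as in the paper, would instead train a meta-model on these outputs); all numbers are made up for illustration.

```python
import numpy as np

# Hypothetical softmax outputs of three base models for two images
# over three maize-disease classes (rows: images, columns: classes).
probs_a = np.array([[0.70, 0.20, 0.10],
                    [0.10, 0.60, 0.30]])
probs_b = np.array([[0.60, 0.30, 0.10],
                    [0.20, 0.50, 0.30]])
probs_c = np.array([[0.80, 0.10, 0.10],
                    [0.05, 0.25, 0.70]])

# Simplest combiner: average the base probabilities and take the argmax.
ensemble = (probs_a + probs_b + probs_c) / 3
pred = ensemble.argmax(axis=1)
```

Even when individual base models disagree (model C prefers class 2 for the second image), averaging lets the majority evidence decide, which is why ensembles tend to be more robust than any single network.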
Authors - Abdulrahman S. Alenizi, Khamis A. Al-Karawi Abstract - The liver is a vital organ responsible for numerous physiological functions in the human body. In recent years, the prevalence of liver diseases has risen significantly worldwide, mainly due to unhealthy lifestyle choices and excessive alcohol use, and the illness is worsened by several hepatotoxic factors. Obesity, undiagnosed viral hepatitis infections, alcohol consumption, increased risk of hemoptysis or hematemesis, renal or hepatic failure, jaundice, hepatic encephalopathy, and many other conditions can all contribute to chronic liver disease. Hepatitis, an infection that inflames liver tissue, has been thoroughly investigated using machine learning for disease identification. Numerous models are employed to diagnose illnesses, but limited research focuses on the connections between hepatitis symptoms. This research intends to examine chronic liver disease through machine learning predictions. It assesses the efficacy of multiple algorithms, including Logistic Regression, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbours (K-NN), and Decision Tree, by quantifying their accuracy, precision, recall, and F1 score. Experiments were performed on the dataset utilising these classifiers to evaluate their efficacy. The findings demonstrate that the Random Forest method attains the highest accuracy at 87.76%, surpassing other models in disease prediction, and it also leads in precision, recall, and F1 score. Consequently, the study concludes that the Random Forest model is the most effective for predicting liver disease in its early stages.
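The four metrics used to compare the classifiers above all derive from the confusion matrix. The sketch below computes them from illustrative counts (the paper's actual confusion matrix is not given, so the tp/fp/fn/tn values here are assumptions):

```python
# Illustrative confusion-matrix counts for a binary liver-disease classifier.
tp, fp, fn, tn = 86, 9, 12, 93

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
precision = tp / (tp + fp)                    # how many flagged cases are real
recall    = tp / (tp + fn)                    # how many real cases are caught
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean
```

Reporting all four matters for a medical screen: a model can score high accuracy while missing many true cases, which recall (and hence F1) would expose.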
Authors - Phuong Thao Nguyen Abstract - In recent years, the application of machine learning (ML) in anomaly detection for auditing and financial error detection has garnered significant attention. Traditional auditing methods, often reliant on manual inspection, face challenges in accuracy and efficiency, especially when handling large datasets. This study explores the integration of ML techniques to enhance the detection of anomalies in financial data specific to Thai Nguyen Province, Vietnam. We evaluate multiple ML algorithms, including supervised models (logistic regression, support vector machines) and unsupervised models (k-means clustering, isolation forest, autoencoders), to identify unusual patterns and potential financial discrepancies. Using financial records and audit reports from Thai Nguyen, the models were trained and tested to assess their accuracy, precision, and robustness. Our findings demonstrate that ML models can effectively detect anomalies and improve error identification compared to traditional methods. This paper provides practical insights and applications for local auditors, highlighting ML’s potential to strengthen financial oversight and fraud prevention within Thai Nguyen. Future research directions are also proposed to enhance model interpretability and address unique challenges in Vietnamese financial contexts.
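Among the unsupervised detectors the study above evaluates, the common principle is flagging records that deviate strongly from the bulk of the data. A minimal baseline in that spirit (not one of the paper's models) is a robust z-score on ledger amounts; the amounts below are invented for illustration.

```python
import numpy as np

# Hypothetical ledger amounts (in VND millions); one entry is clearly aberrant.
amounts = np.array([12.1, 11.8, 12.4, 11.9, 12.0, 95.0, 12.2, 11.7])

# Robust z-score: median and MAD resist distortion by the outlier itself,
# unlike the plain mean/std. Flag entries beyond 3 robust deviations.
median = np.median(amounts)
mad = np.median(np.abs(amounts - median))
robust_z = 0.6745 * (amounts - median) / mad
flags = np.abs(robust_z) > 3
```

Methods such as isolation forests and autoencoders generalise this idea to high-dimensional records, where no single column makes the anomaly obvious.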
Authors - Aman Mussa, Madina Mansurova Abstract - The rapid advancement of neural networks has revolutionized multiple domains, as evidenced by the 2024 Nobel Prizes in Physics and Chemistry, both awarded for contributions to neural networks. Large language models (LLMs), such as ChatGPT, have significantly reshaped AI interactions, gaining unprecedented growth and recognition. However, these models still face substantial challenges with low-resource languages like Kazakh, which accounts for less than 0.1% of online content. The scarcity of training data often results in unstable and inaccurate outputs. To address this issue, we present a novel Kazakh language dataset specifically designed for self-instruct fine-tuning of LLMs, comprising 50,000 diverse instructions from internet sources and textbooks. Using Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning technique, we successfully fine-tuned the LLaMA 2 model on this dataset. Experimental results demonstrate improvements in the model's ability to comprehend and generate Kazakh text, despite the absence of established benchmarks. This research underscores the potential of large-scale models to bridge the performance gap in low-resource languages and highlights the importance of curated datasets in advancing AI-driven technologies for underrepresented linguistic communities. Future work will focus on developing robust benchmarking standards to further evaluate and enhance these models.
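The LoRA technique mentioned above freezes the pretrained weight matrix W and learns only a low-rank update (alpha/r)·B·A, which is why fine-tuning becomes feasible on modest hardware. The numpy sketch below shows the shape of the idea, not the authors' setup: the hidden size, rank, and scaling factor are assumed values, and a real fine-tune would use a library such as PEFT on the actual LLaMA 2 layers.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 512, 8, 16            # hidden size, LoRA rank, scaling (assumed)

W = rng.normal(size=(d, d))         # frozen pretrained weight (not trained)
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialised

def lora_forward(x):
    # Base path plus low-rank update; only A and B receive gradients.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

trainable = A.size + B.size         # parameters actually updated
full = W.size                       # parameters a full fine-tune would touch
```

Because B starts at zero, the adapted layer initially behaves exactly like the frozen model, and here only 1/32 of the layer's parameters are trainable.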
Authors - Eduardo Puraivan, Pablo Ormeno-Arriagada, Steffanie Kloss, Connie Cofre-Morales Abstract - We are in the information age, but also in the era of disinformation, with millions of fake news items circulating daily. Various fields are working to identify and understand fake news. We focus on hybrid approaches combining machine learning and natural language processing, using surface linguistic features, which are independent of language and enable a multilingual approach. Many studies rely on binary classification, overlooking multiclass problems and class imbalance, often focusing only on English. We propose a methodology that applies surface linguistic features for multiclass fake news detection in a multilingual context. Experiments were conducted on two datasets, LIAR (English) and CLNews (Spanish), both imbalanced. Using Synthetic Minority Oversampling Technique (SMOTE), Random Oversampling (ROS), and Random Undersampling (RUS), we observed improved class detection. For example, in LIAR, the classification of the ‘false’ class improved by 43.38% using SMOTE with Adaptive Boosting. In CLNews, the ROS technique with Random Forest raised accuracy to 95%, representing a 158% relative improvement over the unbalanced scenario. These results highlight our approach’s effectiveness in addressing the problem of multiclass fake news detection in an imbalanced, multilingual context.
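The oversampling techniques in the abstract above rebalance classes before training. SMOTE in particular creates synthetic minority samples by interpolating between a minority point and one of its minority-class neighbours; the toy sketch below implements that core step (a simplified, nearest-neighbour-only variant on invented data, not the SMOTE configuration used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_like(X_min, n_new):
    """Minimal SMOTE-style oversampling: move a random minority point a
    random fraction of the way toward its nearest minority neighbour."""
    n = len(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        j = d.argmin()                      # nearest minority neighbour
        lam = rng.random()                  # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = rng.normal(size=(10, 2))            # minority-class feature vectors
X_syn = smote_like(X_min, n_new=40)         # synthesise 40 extra samples
```

Unlike plain random oversampling (ROS), which duplicates existing rows, the interpolated points populate the region between minority samples, which is what helps classifiers learn a broader minority decision region.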