10th International Congress on Information and Communication Technology, held in conjunction with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom | February 18-21, 2025.
Authors - Sulafa Badi, Salam Khoury, Kholoud Yasin, Khalid Al Marri Abstract - This study investigates consumer attitudes toward Mobility as a Service (MaaS) in the context of the UAE's diverse population, focusing on the factors influencing adoption intentions. A survey of 744 participants was conducted to assess public perceptions, employing hierarchical and non-hierarchical clustering methods to identify distinct consumer segments. The analysis reveals five clusters characterised by varying demographics, travel lifestyles, and attitudes towards MaaS, highlighting the influence of UTAUT2 variables, including performance expectancy, social influence, hedonic motivation, price value, and perceived risk. Among the clusters, ‘Enthusiastic Adopters’ and ‘Convenience-Driven Adopters’ emerge as key segments with a strong reliance on public transport and a willingness to adopt innovative transportation solutions. The findings indicate a shared recognition of the potential benefits of MaaS despite differing opinions on its implementation. This research contributes to the theoretical understanding of MaaS adoption by offering an analytical typology relevant to a developing economy while also providing practical insights for policymakers and transport providers. By tailoring services to meet the unique needs of various consumer segments, stakeholders can enhance the integration of MaaS technologies into the UAE's transportation system. Future research should explore the dynamic nature of public sentiment regarding MaaS to inform ongoing development and implementation efforts.
Authors - Rakhi Bharadwaj, Priyanshi Patle, Bhagyesh Pawar, Nikita Pawar, Kunal Pehere Abstract - The detection of forged signatures is a critical challenge in various fields, including banking, legal documentation, and identity verification. Traditional methods for signature verification rely on handcrafted features and machine learning models, which often struggle to generalize across varying handwriting styles and sophisticated forgeries. In recent years, deep learning techniques have emerged as powerful tools for tackling this problem, leveraging large datasets and automated feature extraction to enhance accuracy. In this literature survey paper, we have studied and analyzed various research papers on fake signature detection, focusing on the accuracy of different deep learning techniques. The primary models reviewed include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs). We evaluated the performance of these methods based on their reported accuracy on benchmark datasets, highlighting the strengths and limitations of each approach. Additionally, we discussed challenges such as dataset scarcity and the difficulty of generalizing models to detect different types of forgeries. Our analysis provides insights into the effectiveness of these methods and suggests potential directions for future research in improving signature verification systems.
Authors - Shilpa Bhairanatti, Rubini P Abstract - While the rollout of 5G cellular networks will extend into the next decade, there is already significant interest in the technologies that will form the foundation of its successor, 6G. Although 5G is expected to revolutionize our lives and communication methods, it falls short of fully supporting the Internet-of-Everything (IoE). The IoE envisions a scenario where over a million devices per cubic kilometer, both on the ground and in the air, demand ubiquitous, reliable, and low-latency connectivity. 6G and future technologies aim to create ubiquitous wireless connectivity for the entire communication system, accommodating the rapidly increasing number of intelligent devices and growing communication demand. These objectives can be achieved by incorporating THz-band communication and wider spectrum resources with minimized communication error. However, this communication technology faces several challenges, such as energy efficiency, resource allocation, and latency, which need to be addressed to improve overall communication performance. To overcome these issues, we present a roadmap for Point-to-Point (P2P) and Point-to-Multipoint (P2MP) communication in which a channel coding mechanism is introduced, with the Turbo channel coding scheme as the base approach. Furthermore, deep learning-based training is applied to improve the error-correcting performance of the system. The performance of the proposed model is measured in terms of BER for varied SNR levels and additive white noise channel scenarios, and experimental analysis shows that the proposed coding approach outperforms existing error-correcting schemes.
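A minimal sketch of the evaluation setup named in this abstract (BER measured against SNR over an additive-white-noise channel), shown here for an uncoded BPSK baseline only; it does not implement the proposed Turbo plus deep-learning coding scheme, and all parameters are illustrative assumptions.

```python
# Baseline BER-vs-SNR simulation for uncoded BPSK over an AWGN channel.
# Illustrative only; not the Turbo + deep-learning scheme from the paper.
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_awgn(snr_db: float, n_bits: int = 200_000) -> float:
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits              # BPSK mapping: 0 -> +1, 1 -> -1
    snr_linear = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1.0 / (2.0 * snr_linear))
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decoded = (received < 0).astype(int)     # hard-decision detection
    return float(np.mean(decoded != bits))

for snr_db in range(0, 11, 2):
    print(f"SNR = {snr_db:2d} dB  ->  BER = {ber_bpsk_awgn(snr_db):.5f}")
```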
Authors - Hiep L. Thi Abstract - This paper summarizes the increasing role of UAVs in various sectors, the challenges related to data storage on UAVs, and proposed solutions for improving both the efficiency and security of data management, and it outlines the scope of the study, its methodologies, and key findings.
Authors - Gareth Gericke, Rangith B. Kuriakose, Herman J. Vermaak Abstract - Communication architectures are demonstrating their significance in the development landscape of the Fourth industrial revolution. Nonetheless, the progress of architectural development lags behind that of the Fourth industrial revolution itself, resulting in subpar implementations and research gaps. This paper examines the prerequisites of Smart Manufacturing and proposes the utilization of a novel communication architecture to delineate a pivotal element, information appropriateness, showcasing its efficient application in this domain. Information appropriateness leverages pertinent information within the communication flow at the machine level, facilitating real-time monitoring, decision-making, and control over production metrics. The metrics scrutinized herein include production efficiency, bottleneck mitigation, and network intelligence, while accommodating architectural scalability. These metrics are communicated and computed at the machine level to assess the efficacy of a communication architecture at this level, while also investigating its synergistic relationship across other manufacturing tiers. Results of this ongoing study provide insights into data computation and management at the machine level and demonstrate an effective approach for handling pertinent information at critical junctures. Furthermore, the adoption of a communication architecture helps minimize information redundancy and overhead in both transmission and storage for machine-level communication.
Authors - Y. Abdelghafur, Y. Kaddoura, S. Shapsough, I. Zualkernan, E. Kochmar Abstract - Early reading comprehension is crucial for academic success, involving skills like making inferences and critical analysis, and the Early Grade Reading Assessment (EGRA) toolkit is a global standard for assessing these abilities. However, creating stories that meet EGRA's standards is time-consuming and labour-intensive and requires expertise to ensure readability, narrative coherence, and educational value. In addition, creating these stories in Arabic is challenging due to the limited availability of high-quality resources and the language's complex morphology, syntax, and diglossia. This research examines the use of large language models (LLMs), such as GPT-4 and Jais, to automate Arabic story generation, ensuring readability, narrative coherence, and cultural relevance. Evaluations using Arabic readability formulas (OSMAN and SAMER) show that LLMs, particularly Jais and GPT, can effectively produce high-quality, age-appropriate stories, offering a scalable solution to support educators and enhance the availability of Arabic reading materials for comprehension assessment.
Authors - Md Fahim Afridi Ani, Abdullah Al Hasib, Munima Haque Abstract - This research explores the possibility of improving insect farming by integrating Artificial Intelligence (AI), unravelling the complicated relationship between butterflies and the plants they pollinate in order to reconsider how species are classified and to reshape farming practices for butterflies. Traditional methods of butterfly classification are morphologically and behaviorally intensive, and therefore time-consuming to conduct and subject to a high level of subjective interpretation. We therefore apply our approach to ecological interactions between butterfly species and their respective plants to obtain efficient, data-driven solutions. The work also focuses on applying AI to realize the full benefits of butterfly farming by determining where each species will be best located. The system can therefore classify and manage butterflies with much greater ease, saving the time and energy usually spent on conventional classification methods and passing those savings on to the farmer or industrial client. The research deepens the understanding of insect-plant relationships for better forecasting of butterfly behavior and, therefore, healthier ecosystems through optimized pollination and habitat balance. For that purpose, a dataset of butterfly species and related plants was developed, on which machine learning models were applied, including decision trees, random forests, and neural networks; the neural network outperformed the others with an accuracy of 93%. Apart from classification, the system helps identify habitats that provide the best possible conditions for rearing butterflies. Applying AI in this field simplifies the work of butterfly farming, making it an important tool for improving growth and the conservation of biodiversity. Integrating machine learning into ecological research and industry provides scalable, time-efficient solutions for species classification toward the sustainable farming of butterflies.
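A minimal sketch of the model comparison named in this abstract (decision tree vs. random forest vs. neural network), assuming a synthetic feature table in place of the authors' butterfly-plant dataset; hyperparameters are illustrative.

```python
# Compare the three classifier families mentioned in the abstract on a
# synthetic multi-class dataset standing in for butterfly species features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1200, n_features=12, n_classes=4,
                           n_informative=8, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=7),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=7),
    "Neural network": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=800,
                                    random_state=7),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```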
Authors - Zachary Matthew Alabastro, Stephen Daeniel Mansueto, Joseph Benjamin Ilagan Abstract - Product innovation is critical in strategizing business decisions in highly competitive markets. For product enhancements, the entrepreneur must gather data from a target demographic through research. One solution involves qualitative customer feedback. The study proposes the viability of artificial intelligence (AI) as a co-pilot model to simulate synthetic customer feedback with agentic systems. Prompting with ChatGPT-4o's homo silicus attribute can generate feedback on specific business contexts. Results show that large language models (LLMs) can generate qualitative insights to use in product innovation, producing human-like responses through few-shot techniques and Chain-of-Thought (CoT) prompting. The data were validated with a Python script, using cosine similarity to quantify how closely the synthetic feedback matches actual customer feedback. This model can be essential in reducing the total resources needed for product evaluation through preliminary analysis, which can contribute to a sustainable competitive advantage.
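A minimal sketch of the cosine-similarity check described in this abstract, assuming TF-IDF vectors as the text representation; the authors' exact embedding and validation script are not specified, and the feedback strings below are illustrative.

```python
# Quantify how closely LLM-generated (synthetic) feedback matches actual
# customer feedback using cosine similarity over TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

synthetic = [
    "The checkout flow feels slow on mobile and needs fewer steps.",
    "I would pay more for same-day delivery if tracking were reliable.",
]
actual = [
    "Checkout takes too long on my phone, too many screens.",
    "Same-day delivery would be great, but only if I can track the order.",
]

vectorizer = TfidfVectorizer().fit(synthetic + actual)
sim = cosine_similarity(vectorizer.transform(synthetic),
                        vectorizer.transform(actual))
print(sim)  # sim[i, j]: similarity between synthetic item i and actual item j
```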
Authors - Jeehaan Algaraady, Mohammad Mahyoob Albuhairy Abstract - Sarcasm, a sentiment often used to express disdain, is the focus of our comprehensive research. We aim to explore the effectiveness of various machine learning and deep learning models, such as Support Vector Machines (SVM), Recurrent Neural Networks (RNN), Bidirectional Long Short-Term Memory (BiLSTM), and fine-tuned pre-trained transformer-based (BERT) models, for detecting sarcasm using the News Headlines dataset. Our framework investigates the impact of DistilBERT text embeddings on the accuracy of the DL models (RNN and BiLSTM) for training and classification. To assess the proposed models, the authors utilized four performance metrics: F1 score, recall, precision, and accuracy. The outcomes revealed that the BERT model achieves outstanding performance and outperforms the other models, with a state-of-the-art F1 score of 98% for sarcasm classification. The F1 scores for SVM, BiLSTM, and RNN are 93%, 95.05%, and 95.52%, respectively. Our experiment on the News Headlines dataset demonstrates that incorporating DistilBERT to produce the word vectors notably improves the accuracy of the RNN and BiLSTM models. The accuracy of the BiLSTM and RNN models with TF-IDF, Word2Vec, and GloVe embeddings was 93.9% and 93.8%, respectively; these scores increased to 95.05% and 95.52% when the models incorporated DistilBERT for text embedding. This improvement can be attributed to DistilBERT's ability to capture contextual information and semantic relationships between words, thereby enriching the word vector representation.
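A minimal sketch of the embedding pipeline this abstract describes: DistilBERT contextual embeddings feeding a BiLSTM classifier head. The model name, dimensions, and classifier head below are assumptions, not the authors' exact configuration.

```python
# Extract DistilBERT embeddings for headlines and pass them to a BiLSTM head.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")

headlines = ["scientists discover water is wet", "local man wins lottery twice"]
batch = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**batch).last_hidden_state   # (batch, seq_len, 768)

class BiLSTMSarcasmHead(nn.Module):
    def __init__(self, embed_dim=768, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return torch.sigmoid(self.fc(out[:, -1, :]))   # probability of sarcasm

head = BiLSTMSarcasmHead()
print(head(embeddings).shape)  # (batch, 1)
```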
Authors - Lois Abigail To, Zachary Matthew Alabastro, Joseph Benjamin Ilagan Abstract - Customer development (CD) is a Lean Startup (LS) methodology for startups to validate their business hypotheses and refine their business model based on customer feedback. This paper proposes designing a large language model-based multi-agent system (LLM MAS) to enhance the customer development process by simulating customer feedback. Using LLMs’ natural language understanding (NLU) and synthetic multi-agent capabilities, startups can conduct faster validation while obtaining preliminary insights that may help refine their business model before engaging with the real market. The study presents a model in which the LLM MAS simulates customer discovery interactions between a startup founder and potential customers, together with design considerations to ensure real-world accuracy and alignment with CD. If carefully designed and implemented, the model may serve as a useful co-pilot that accelerates the customer development process.
Authors - Prince Kelvin Owusu, George Oppong Ampong, Joseph Akwetey Djossou, Gibson Afriyie Owusu, Thomas Henaku, Bless Ababio, Jean Michel Koffel Abstract - In today's dynamic digital landscape, understanding customer opinions and sentiments has become paramount for businesses striving to maintain competitiveness and foster customer loyalty. However, the banking sector in Ghana faces challenges in effectively harnessing innovative technologies to grasp and respond to customer sentiments. This study aims to address this gap by investigating the application of ChatGPT technology within Ghanaian banks to augment customer service and refine sentiment analysis in real time. Employing a mixed-method approach, the study engaged 40 representatives, including IT specialists, data analysts, and customer service managers, from 4 banks in Ghana through interviews. Additionally, 160 customers, 40 from each bank, participated in a survey. The findings revealed a significant misalignment between customer expectations and current service provisions. To bridge this gap, the integration of ChatGPT technology is proposed, offering enhanced sentiment analysis capabilities. This approach holds promise for elevating customer satisfaction and fostering loyalty within Ghana's competitive banking landscape.
Authors - Japheth Otieno Ondiek, Kennedy Ogada, Tobias Mwalili Abstract - This experiment models the implementation of distance metrics and three-way decisions for K-Nearest Neighbor (KNN) classification. As a machine learning method, KNN has inherent classification deficits owing to its high computing demands, sensitivity to outliers, and the curse of dimensionality. Many researchers have found that combining algorithmic methods can lead to better results in prediction and forecasting. In this experiment, we use the Euclidean distance metric to compute query distances to nearest neighbors and a weighted three-way decision to model a highly adaptable and accurate KNN classification technique. The implementation follows an experimental design method to verify that the computed Euclidean distances and weighted three-way decision classification achieve better computing performance and predictability. Our experimental results reveal that distance metrics significantly affect the performance of the KNN classifier through the choice of K-values. We found that the K-value on the applied datasets tolerates noise levels to a certain degree, while some distance metrics are less affected by noise. This experiment primarily focused on showing that the best K-value from the distance metric measure underpins three-way KNN classification accuracy and performance. The combination of the best distance metric and a three-way decision model for the KNN classification algorithm shows improved performance compared with other conventional algorithm set-ups, making it more suitable for classification in the context of this experiment. It outperforms KNN, ANN, DT, NB, and SVM on the crop yield datasets used in the experiment.
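A minimal sketch of Euclidean-distance KNN with a weighted three-way decision (accept / reject / defer) on the positive-class vote, in the spirit of this abstract; the thresholds alpha and beta and the inverse-distance weighting are illustrative assumptions, not the paper's exact scheme.

```python
# Euclidean KNN with a weighted three-way decision on the query point.
import numpy as np

def three_way_knn(X_train, y_train, x_query, k=5, alpha=0.7, beta=0.3):
    distances = np.linalg.norm(X_train - x_query, axis=1)   # Euclidean metric
    nearest = np.argsort(distances)[:k]
    weights = 1.0 / (distances[nearest] + 1e-9)             # closer -> heavier vote
    score = np.sum(weights * (y_train[nearest] == 1)) / np.sum(weights)
    if score >= alpha:
        return "accept (positive)"
    if score <= beta:
        return "reject (negative)"
    return "defer (boundary region)"

X = np.array([[1.0, 1.0], [1.2, 0.9], [4.0, 4.1], [3.9, 4.2], [2.5, 2.5]])
y = np.array([1, 1, 0, 0, 1])
print(three_way_knn(X, y, np.array([1.1, 1.0]), k=3))
```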
Authors - Indika Udagedara, Brian Helenbrook, Aaron Luttman Abstract - This paper presents a reduced order modeling (ROM) approach for radiation source identification and localization using data from a limited number of sensors. The proposed ROM method comprises two primary steps: offline and online. In the offline phase, a spatial-energetic basis representing the radiation field for various source compositions and positions is constructed. This is achieved using a stochastic approach based on principal component analysis and maximum likelihood estimation. The online step then leverages these basis functions for determining the complete radiation field from limited data collected from only a few detectors. The parameters are estimated using Bayes rule with a Gaussian prior. The effectiveness of the ROM approach is demonstrated on a simplified model problem using noisy data from a limited number of sensors. The impact of noise on the model’s performance is analyzed, providing insights into its robustness. Furthermore, the approach was extended to real-world radiation detection scenarios, demonstrating that these techniques can be used to localize and identify the energy spectra of mixed radiation sources, composed of several individual sources, from noisy sensor data collected at limited locations.
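A minimal sketch of the two-step ROM idea in this abstract: an offline PCA basis built from field snapshots, then an online linear-Gaussian MAP estimate (Gaussian prior) of the basis coefficients from a few noisy sensor readings. All dimensions, noise levels, and the synthetic snapshots are illustrative assumptions.

```python
# Offline: PCA basis from snapshots. Online: Bayesian (MAP) coefficient
# estimate from a handful of noisy sensors, then full-field reconstruction.
import numpy as np

rng = np.random.default_rng(1)

n_grid, n_snapshots, r = 200, 60, 5
snapshots = rng.standard_normal((n_grid, n_snapshots))        # placeholder data
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
basis = U[:, :r]                                              # spatial basis (n_grid x r)

sensor_idx = rng.choice(n_grid, size=8, replace=False)        # few sensor locations
true_coeffs = rng.standard_normal(r)
true_field = mean_field[:, 0] + basis @ true_coeffs
sigma = 0.05
y = true_field[sensor_idx] + sigma * rng.standard_normal(len(sensor_idx))

# MAP estimate with prior coeffs ~ N(0, tau^2 I)
A = basis[sensor_idx, :]
tau = 1.0
lhs = A.T @ A / sigma**2 + np.eye(r) / tau**2
rhs = A.T @ (y - mean_field[sensor_idx, 0]) / sigma**2
coeffs_map = np.linalg.solve(lhs, rhs)
field_estimate = mean_field[:, 0] + basis @ coeffs_map
print("coefficient error:", np.linalg.norm(coeffs_map - true_coeffs))
```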
Authors - Shima Pilehvari, Wei Peng, Yasser Morgan, Mohammad Ali Sahraian, Sharareh Eskandarieh Abstract - Overfitting is a common problem during model training, particularly for binary medical datasets with class imbalance. This research specifically addresses this issue in predicting Multiple Sclerosis (MS) progression, with the primary goal of improving model accuracy and reliability. By investigating various data resampling techniques, ensemble methods, feature extraction, and model regularization, the study thoroughly evaluates the effectiveness of these strategies in enhancing stability and performance for highly imbalanced datasets. Compared to prior studies, this research advances existing approaches by integrating Kernel Principal Component Analysis (KPCA), moderate under-sampling, Synthetic Minority Oversampling Technique (SMOTE), and post-processing techniques, including Youden’s J Statistic and manual threshold adjustments. This comprehensive strategy significantly reduced overfitting while improving the generalization of models, particularly the Multilayer Perceptron (MLP), which achieved an Area Under the Curve (AUC) of 0.98—outperforming previous models in similar applications. These findings establish important best practices for developing robust prognostic models for MS progression and underscore the importance of tailored solutions in complex medical prediction tasks.
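A minimal sketch of the pipeline components this abstract names: moderate under-sampling plus SMOTE for class rebalancing, Kernel PCA for feature extraction, an MLP classifier, and a decision threshold chosen by Youden's J statistic. Sampling ratios, kernel, and network size are illustrative assumptions, and synthetic data stands in for the MS dataset.

```python
# Imbalanced-learning pipeline with KPCA, MLP, and a Youden's J threshold.
import numpy as np
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("under", RandomUnderSampler(sampling_strategy=0.5, random_state=0)),
    ("smote", SMOTE(random_state=0)),
    ("kpca", KernelPCA(n_components=10, kernel="rbf")),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])
pipe.fit(X_tr, y_tr)

proba = pipe.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, proba)
best_threshold = thresholds[np.argmax(tpr - fpr)]   # Youden's J = TPR - FPR
print("AUC:", roc_auc_score(y_te, proba), "threshold:", best_threshold)
```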
Authors - Ain Nadhira Mohd Taib, Fauziah Zainuddin, M. Rahmah Abstract - This paper presents AdaptiCare4U, an interactive mental health assessment tool for high school settings. By integrating an adaptive technique with an established mental health assessment instrument in a user-friendly format, AdaptiCare4U improves the experience of answering a mental health assessment. Through an expert review validation technique, AdaptiCare4U demonstrates high effectiveness in accessibility, ease of use, and practical value, with mean scores of 5, 4.2, and 4.4, respectively. Additionally, students' perceptions further support the tool's usability, with positive feedback highlighting its engaging interface, use of multimedia elements, and stress-reducing design. Favorable usability ratings from both students and experts make AdaptiCare4U a promising tool for aiding counselors in conducting efficient mental health assessments.
Authors - Aayush Kulkarni, Mangesh Bedekar, Shamla Mantri Abstract - This paper proposes a novel serverless computing model that addresses critical challenges in current architectures, namely cold start latency, resource inefficiency, and scalability limitations. The research integrates advanced caching mechanisms, intelligent load balancing, and quantum computing techniques to enhance serverless platform performance. Advanced distributed caching with coherence protocols is implemented to mitigate cold start issues. An AI-driven load balancer dynamically allocates resources based on real-time metrics, optimizing resource utilization. The integration of quantum computing algorithms aims to accelerate specific serverless workloads. Simulations and comparative tests demonstrate significant improvements in latency reduction, cost efficiency, scalability, and throughput compared to traditional serverless models. While quantum integration remains largely theoretical, early results suggest potential for substantial performance gains in tasks like function lookups and complex data processing. This research contributes to the evolving landscape of cloud computing, offering insights into optimizing serverless architectures for future applications in edge computing, AI, and data-intensive fields. The proposed model sets a foundation for more efficient, responsive, and scalable cloud solutions.
Authors - Nouha Arfaoui, Mohmed Boubakir, Jassem Torkani, Joel Indiana Abstract - The increasing reliance on surveillance systems and the vast amounts of video data have created a growing need for automated systems to detect violent and aggressive behaviors in real time. Manual video analysis is not only labor-intensive but also prone to errors, particularly in large-scale monitoring situations. Machine learning and deep learning have gained significant attention for their ability to enhance the accuracy and efficiency of violence detection in images and videos. Violence is a critical societal issue, occurring in public spaces, workplaces, and social environments, and is a leading cause of injury and death. While video surveillance is a key tool for monitoring such behaviors, manual monitoring remains inefficient and subject to human fatigue. Early ML methods relied on manual feature extraction, which limited their flexibility in dynamic scenarios. Ensemble techniques, including AdaBoost and Gradient Boosting, provided improvements but still required extensive feature selection. The introduction of deep learning, particularly Convolutional Neural Networks (CNNs), has enabled automatic feature learning, making them more effective in violence detection tasks. This study focuses on detecting violence and aggression in workplace settings by addressing key aspects such as violent actions and aggressive objects, utilizing various deep learning algorithms to identify the most efficient model for each task.
Authors - Kalupahanage A. G. A, Bulathsinhala D.N, Herath H.M.S.D, Herath H.T.M.T, Shashika Lokuliyana, Deemantha Siriwardana Abstract - The explosive growth of the Internet of Things (IoT) has had a substantial impact on daily life and businesses, allowing for real-time monitoring and decision-making. However, increased connectivity also brings higher security risks, such as botnet attacks and the need for stronger user authentication. This research explores how machine learning can enhance Internet of Things security by identifying abnormal activity, utilizing behavioral biometrics to secure cloud-based dashboards, and detecting botnet threats early. The researchers tested several machine learning methods, including K-Nearest Neighbors (KNN), Decision Trees, and Logistic Regression, on publicly available datasets. The Decision Tree model achieved an accuracy rate of 73% for anomaly identification, the strongest result among the tested models for dealing with complex security risks. The findings show the effectiveness of these strategies in enhancing the security and reliability of IoT devices. This study provides significant insights into the use of machine learning to protect Internet of Things devices while also addressing crucial concerns such as power consumption and privacy.
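A minimal sketch of the model comparison described in this abstract (KNN, Decision Tree, Logistic Regression for anomaly identification); a synthetic imbalanced dataset stands in for the public IoT datasets, and the hyperparameters are illustrative assumptions.

```python
# Compare KNN, Decision Tree, and Logistic Regression on a stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(max_depth=8, random_state=42),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.3f}")
```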
Authors - A B Sagar, K Ramesh Babu, Syed Usman, Deepak Chenthati, E Kiran Kumar, Boppana Balaiah, PSD Praveen, G Allen Pramod Abstract - Agricultural disasters, mostly those caused by biological threats, pose severe risks to global food security and economic stability. Early detection and effective management are essential for mitigating these risks. In this research paper we propose a comprehensive disaster prediction and management framework that integrates resources such as social networks or the Internet of Things (IoT) for data collection. The model combines real-time data collection, risk assessment, and decision-making processes to forecast agricultural disasters and suggest mitigation strategies. The mathematical foundation of this model defines the relationships between key variables, such as plant species, infestation agent species, tolerance levels, and infestation rates. The system relies on IoT or mobile-based social network agents for ground-level data collection, to obtain precise and consistent information from diverse geographic regions. The model further includes a hierarchical risk assessment process that identifies, evaluates, and assesses risks based on predefined criteria, enabling informed decision-making for disaster mitigation. Multi-plant species and multi-infestation-agent interactions are also considered to capture the complexities of agricultural systems. The proposed framework provides a scalable approach to predicting and managing agricultural disasters, particularly targeting biological threats. By incorporating real-time data and dynamic decision-making mechanisms, the model considerably improves the resilience of agricultural systems against both localized and large-scale threats.
Authors - Herrera Nelson, Paul Francisco Baldeon Egas, Gomez-Torres Estevan, Sancho Jaime Abstract - Quito, the capital of Ecuador, is the economic core of the country, where commercial, administrative, and tourist activities are concentrated. With population growth, the city has undergone major transformations, resulting in traffic congestion problems that affect health, cause delays in daily activities, and increase pollution levels, among other inconveniences. Over time, important mobility initiatives have been implemented, such as traffic control systems, monitoring, construction of peripheral roads, and the "peak and license plate" measure that restricts the use of vehicles during peak hours according to their license plate, a strategy also adopted in several Latin American countries. However, these actions have not been enough, and congestion continues to increase, causing discomfort to citizens. Given this situation, the implementation of a low-cost computer application has been proposed that identifies traffic situations in real time and supports decisions to mitigate this problem, using processed data from the social network Twitter and traffic records from the city of Quito.
Authors - Elissa Mollakuqe, Hasan Dag, Vesa Mollakuqe, Vesna Dimitrova Abstract - Groupoids are algebraic structures, which generalize groups by allowing partial symmetries, and are useful in various fields, including topology, category theory, and algebraic geometry. Understanding the variance explained by Principal Component Analysis (PCA) components and the correlations among variables within groupoids can provide valuable insights into their structures and relationships. This study aims to explore the use of PCA as a dimensionality reduction technique to understand the variance explained by different components in the context of groupoids. Additionally, we examine the interrelationships among variables through a color-coded correlation matrix, facilitating insights into the structure and dependencies within groupoid datasets. The findings contribute to the broader understanding of data representation and analysis in mathematical and computational frameworks.
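A minimal sketch of the two analyses this abstract describes: PCA explained-variance ratios and a colour-coded correlation matrix. Random data stands in for the groupoid-derived feature table, which is not reproduced here.

```python
# PCA explained variance and a colour-coded correlation matrix for a
# stand-in feature table (rows: samples, columns: variables).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 6))

pca = PCA().fit(data)
print("Explained variance ratio per component:", pca.explained_variance_ratio_)

corr = np.corrcoef(data, rowvar=False)          # variable-by-variable correlations
plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.colorbar(label="Pearson correlation")
plt.title("Colour-coded correlation matrix")
plt.show()
```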
Authors - Laurent Barthelemy Abstract - In 2024 [7], the author proposed a calculation of weather criteria for vessel boarding against the ladder of an offshore wind turbine, based on a regular wave. However, international guidelines [2] prescribe that "95% waves pass with no slip above 300mm (or one ladder rung)". In order to meet such acceptability criteria, it becomes necessary to investigate boarding under a real sea state, which is an irregular wave. The findings are consistent with results from other publications [6] [7]. The outcome is a proposal of boarding optimisation strategies, compared to present professional practices. The purpose is to achieve lower gas emissions by minimising fuel consumption.
Authors - Amro Saleh, Nailah Al-Madi Abstract - Machine learning (ML) enables valuable insights from data, but traditional ML approaches often require centralizing data, raising privacy and security concerns, especially in sensitive sectors like healthcare. Federated Learning (FL) offers a solution by allowing multiple clients to train models locally without sharing raw data, thus preserving privacy while enabling robust model training. This paper investigates using FL for classifying breast ultrasound images, a crucial task in breast cancer diagnosis. We apply a Convolutional Neural Network (CNN) classifier within an FL framework, evaluated through methods like FedAvg on platforms such as Flower and TensorFlow. The results show that FL achieves competitive accuracy compared to centralized models while ensuring data privacy, making it a promising approach for healthcare applications.
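A minimal sketch of the FedAvg aggregation step at the heart of the federated setup this abstract describes: each client trains locally and the server averages layer weights, weighted by client dataset sizes. Framework-specific code (Flower, TensorFlow) is intentionally omitted; this is a framework-agnostic illustration with toy weight shapes.

```python
# Framework-agnostic FedAvg: size-weighted averaging of client model weights.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average each layer's weights across clients, weighted by dataset size."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(client_weights, client_sizes))
        averaged.append(layer_sum)
    return averaged

# Three clients, each holding two "layers" of weights (toy shapes and values).
clients = [[np.full((3, 3), i, dtype=float), np.full((2,), i, dtype=float)]
           for i in range(1, 4)]
sizes = [100, 200, 700]
global_weights = fed_avg(clients, sizes)
print(global_weights[1])  # size-weighted average of the second layer
```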
Authors - Ahmed D. Alharthi, Mohammed M. Tounsi Abstract - The Hajj pilgrimage represents one of the largest mass gatherings globally, posing substantial challenges in terms of health and safety management. Millions of pilgrims converge each year in Saudi Arabia to fulfil their religious obligations, underscoring the critical need to address the various health risks that may emerge during such a large-scale event. Health volunteering plays a pivotal role in delivering timely and high-quality medical services to pilgrims. This study introduces the Integrated Health Volunteering (IHV) framework, designed to enhance health and safety outcomes through an optimised, rapid response system. The IHV framework facilitates the coordinated deployment of healthcare professionals—including doctors, anaesthetists, pharmacists, and others—in critical medical emergencies such as cardiac arrest and severe haemorrhage. Central to this framework is the integration of advanced technologies, including Artificial Intelligence algorithms, to support health volunteers’ decision-making. The framework has been validated and subjected to accuracy assessments to ensure its efficacy in real-world situations, particularly in the context of mass gatherings like the Hajj.