The 10th International Congress on Information and Communication Technology, held in conjunction with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom, February 18-21, 2025.
Authors - Reena (Mahapatra) Lenka, Smita Mehendale Abstract - This research paper examines how Artificial Intelligence (AI) and Extended Reality (XR) might work together to change human experiences and capacities. AI, by mimicking human cognitive capabilities, is central to enhancing immersive environments. The study conducts a thorough literature review to comprehend the goals, uses, constraints, and viewpoints of the AI-XR synergy. It emphasises how AI may complement human labour and how XR can produce multi-dimensional experiences, drawing examples from the aerospace, construction, and healthcare sectors. The paper describes the influence of these technologies on the future nature of work and focuses on the need for companies to create strategies that address both the possibilities and the problems. The role of H.R. is growing in recruiting and developing the talent necessary to merge human and machine efforts in modern workplaces.
Authors - Byron Albuja-Sanchez, Jeniffer Flores-Toala, Arcesio Bustos-Gaibor, Sandra Arias-Villon Abstract - The difficulty in predicting the risk of fetal death is that it depends on several aspects, not only medical but also economic and social aspects of the pregnant mothers, which makes an early response very difficult. This work applies classification algorithms to detect the risk of fetal death based on socioeconomic and demographic data of pregnant mothers in Ecuador, using datasets from 2000 to 2021. The trained algorithms include decision trees, random forests, neural networks, bagging classifiers, k-nearest neighbors, and Bernoulli naive Bayes. As cases of fetal death are very rare, oversampling and undersampling techniques were applied to train the algorithms. The performance of the trained algorithms was compared using their respective confusion matrices. The best performance was obtained by the algorithms trained with undersampling, among which the neural network stood out. The neural network's strong performance was attributed to its nature of classifying by assigning weights to each input parameter.
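As an illustration of the undersampling-plus-neural-network pipeline described above, the following minimal sketch balances a rare-event training set and evaluates with a confusion matrix; the synthetic data is a stand-in for the Ecuadorian records, which are not reproduced here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)

# Synthetic stand-in for socioeconomic/demographic features; fetal-death
# cases are rare (~2% positive), mirroring the class imbalance in the paper.
X = rng.normal(size=(20_000, 12))
y = (rng.random(20_000) < 0.02).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random undersampling: keep all minority cases and sample an equal number
# of majority cases, so the classifier trains on a balanced set.
pos = np.where(y_tr == 1)[0]
neg = rng.choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr[idx], y_tr[idx])

# Performance comparison in the paper relies on confusion matrices.
print(confusion_matrix(y_te, clf.predict(X_te)))
```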
Authors - Christos Chronis, Iraklis Varlamis, Konstantinos Tserpes, George Dimitrakopoulos, Faycal Bensaali Abstract - This paper introduces an innovative architecture for deploying AI models in edge and cloud environments, leveraging Federated Learning and RISC-V processors for privacy and real-time inference. It addresses the constraints of edge devices like Raspberry Pi and Jetson Nano by training models locally and aggregating results in the cloud to mitigate overfitting and catastrophic forgetting. RISC-V processors enable high-speed inference at the edge. Applications include energy consumption monitoring with LSTM models and recommendations via collaborative filtering, and multi-robot human collaboration using CNN and YOLO models. Model compression and partitioning optimize performance on RISC-V, with experiments demonstrating scalability and responsiveness under varying computational demands.
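A minimal sketch of the cloud-side aggregation step in Federated Learning, using FedAvg-style weighted averaging (a common choice; the paper's exact aggregation rule may differ):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights, weighted by local dataset size,
    as in FedAvg; each client entry is a list of layer arrays."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Three edge devices (e.g., Raspberry Pi-class nodes) with locally trained
# two-layer models of identical shape but different amounts of local data.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=(2,))] for _ in range(3)]
sizes = [1200, 300, 500]

global_model = fed_avg(clients, sizes)
print([w.shape for w in global_model])
```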
Authors - Joyce Wong Ching Yan, Davy Tsz Kit Ng Abstract - This chapter investigates the effects of generative artificial intelligence and digital transformation on K-16 school leaders in the post-pandemic period. It describes the challenges those leaders face in technology integration and recommends developing AI competencies, interdisciplinary curricula, and relevant leadership skills. The shift to online learning prompted by measures to prevent the spread of COVID-19 brought many positives, but many education managers continue to struggle with educational technology integration, which depends on factors such as the cultivation of good ideas and the provision of working spaces that encourage teacher collaboration. The chapter proposes professional development plans to help teachers modernize their digital knowledge. It also addresses efficient digital design that supports the implementation of contemporary curricular programs. By attending to these priorities, school leaders can better prepare their schools for the demands of the digital world and, consequently, enhance student performance. The chapter adds further discussion of initiatives and projects on school leadership and technology integration, describing techniques relevant to today's education and the issues leaders face as the environment changes rapidly.
Authors - Mohammad Hamzehloui, Ardavan Ashabi Abstract - Microservices have emerged as a preferred architectural style for developing scalable and resilient applications, especially within cloud environments. This approach offers significant advantages over traditional monolithic architectures, such as enhanced scalability, flexibility, and fault isolation. However, these benefits come with substantial operational costs. Running microservices on cloud platforms incurs high expenses due to the need for extensive monitoring, complex service management, and dynamic resource allocation. Industry solutions have primarily focused on monitoring and management, leaving a gap in comprehensive strategies for cost reduction through optimization and resource management. This study aims to identify and analyze the primary cost drivers of running microservices and assess their individual impacts. By providing a detailed analysis, this research enhances the understanding of cost factors, aiding in the cost management and optimization of cloud-based microservices. This knowledge helps businesses make informed decisions to minimize expenses while maximizing the benefits of cloud adoption. Key cost drivers identified include virtualization mechanisms, scaling solutions, microservice architectures, and API designs. Microservices vary significantly in performance and resource consumption depending on their design and architecture; however, by following certain best practices, it is possible to reduce the overall running costs of microservices by minimizing resource consumption.
Authors - Rachel Roux, Sonia Yassa, Olivier Romain Abstract - Cloud computing is widely used to collect data from various devices, and this data must be processed quickly. To manage this growing volume, fog computing helps reduce delay and processing costs by assigning tasks to suitable devices. This article presents an adapted binary Monarch Butterfly algorithm for task offloading in a fog-cloud environment. This metaheuristic directly constructs a Pareto front, offering a representation of the solution space. Two versions are examined: one using random search and the other a deterministic search with crowding distance. Simulations on workloads ranging from 40 to 500 tasks show that the binary Monarch Butterfly algorithm can outperform state-of-the-art algorithms for cost optimization while balancing delay.
Authors - Leo Thomas Ramos, Angel D. Sappa Abstract - This work explores the integration of a Channel Attention (CA) module into the ConvNeXt architecture to improve performance in scene classification tasks. Using the UC Merced dataset, experiments were conducted with two data splits: 50% and 20% for training. Models were trained for up to 20 epochs, limiting the training process to assess which models could extract the most relevant features efficiently under constrained conditions. The ConvNeXt architecture was modified by incorporating a Squeeze-and-Excitation block, aiming to enhance the importance of each feature channel. ConvNeXt models with CA showed strong results, achieving the highest performance in the experiments conducted. ConvNeXt large with CA reached 90% accuracy and 89.75% F1-score with 50% of the training data, while ConvNeXt base with CA achieved 77.14% accuracy and 75.23% F1-score when trained with only 20% of the data. These models consistently outperformed their standard counterparts, as well as other architectures like ResNet and Swin Transformer, achieving improvements of up to 9.60% in accuracy, highlighting the effectiveness of CA in boosting performance, particularly in scenarios with limited data.
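For readers unfamiliar with the Channel Attention mechanism, the following PyTorch sketch shows a standard Squeeze-and-Excitation block of the kind inserted into ConvNeXt above; the exact placement and hyperparameters in the paper may differ:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Squeeze-and-Excitation channel attention: global-average-pool each
    channel, pass the result through a bottleneck MLP, and rescale channels
    by the resulting sigmoid weights."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: (B, C)
        return x * w.view(b, c, 1, 1)          # excite: reweight channels

# A feature map such as the output of an early ConvNeXt stage.
x = torch.randn(8, 96, 56, 56)
print(SqueezeExcite(96)(x).shape)              # torch.Size([8, 96, 56, 56])
```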
Authors - Hamada R. H. Al-Absi, Devi G. Kurup, Amina Daoud, Jens Schneider, Wajdi Zaghouani, Saeed Mohd H. M. Al Marri, Younss Ait Mou Abstract - This study integrates traditional Science of Hadith literature, which documents the sayings, actions, and approvals of the Prophet Muhammad (PBUH), with modern digital tools to analyze the geographic and temporal data of Hadith narrators. Using the Kaggle Hadith Narrators dataset, we apply Kernel Density Estimation (KDE) to map the spatial distribution of narrators’ birthplaces, places of stay, and death locations across generations, revealing key geographical hubs of Hadith transmission, such as Medina, Baghdad, and Nishapur. By examining narrators’ timelines and locations, we illustrate movement patterns and meeting points over time, providing insights into the spread of Hadith across the Islamic world during early Islamic history. To our knowledge, this research is the first systematic attempt to analyze Hadith transmission using geo-spatial and temporal methods, offering a novel perspective on the geographic and intellectual dynamics of early Islamic scholarship.
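A minimal sketch of the KDE step, using scipy with synthetic (longitude, latitude) points as a stand-in for the narrator locations in the Kaggle dataset:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Hypothetical (longitude, latitude) pairs standing in for narrator
# locations clustered around two transmission hubs.
medina, baghdad = (39.61, 24.47), (44.36, 33.31)
pts = np.vstack([
    rng.normal(medina, 0.8, size=(300, 2)),
    rng.normal(baghdad, 0.8, size=(200, 2)),
]).T                                   # shape (2, n), as gaussian_kde expects

kde = gaussian_kde(pts)

# Density evaluated on a grid reveals spatial hubs of transmission.
lon = np.linspace(35, 50, 100)
lat = np.linspace(20, 38, 100)
grid = np.vstack([g.ravel() for g in np.meshgrid(lon, lat)])
density = kde(grid).reshape(100, 100)
print("peak density cell:", np.unravel_index(density.argmax(), density.shape))
```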
Authors - Francisco Seipel-Soubrier, Jonathan Cyriax Brast, Eicke Godehardt, Jorg Schafer Abstract - We propose a proof-of-concept architecture for automated video summarization and evaluate its performance, addressing the challenges posed by the increasing prevalence of video content. The research focuses on creating a multi-modal approach that integrates audio and visual analysis techniques to generate comprehensive video descriptions. Evaluation of the system across various video genres revealed that while video-based large language models show improvements over image-only models, they still struggle to capture nuanced visual narratives, resulting in generalized output for videos without a strong speech-based narrative. The multi-modal approach demonstrated the ability to generate useful short summaries for most video types, but for speech-heavy videos it offers minimal advantages over speech-only processing. The generation of textual alternatives and descriptive transcripts showed promise; while currently stable mainly for speech-heavy videos, future investigation into refinement techniques and potential advancements in video-based large language models holds promise for improved performance.
Authors - A.G.H.R. Godage, H.R.O.E. Dayaratna Abstract - This study explores the implementation of edge computing for semi-automated vehicle systems in urban environments, leveraging modern wireless technologies such as 5G for efficient data transmission and processing. The proposed framework integrates a vehicle-mounted camera, an edge server, and deep learning models to identify critical objects, such as pedestrians and traffic signals, and predict vehicle speeds for the subsequent 30 seconds. By offloading computationally intensive tasks to an edge server, the system reduces the vehicle’s processing load and energy consumption, while embedded offline models ensure operational continuity during network disruptions. The research focuses on optimizing image compression techniques to balance bandwidth usage, transmission speed, and prediction accuracy. Comprehensive experiments were conducted using the Zenseact Open Dataset, a new dataset published in 2023, which has not yet been widely utilized in the domain of semi-automated vehicle systems, particularly for tasks such as predictive speed modeling. The study evaluates key metrics, including bandwidth requirements, round-trip time (RTT), and the accuracy of various machine learning and neural network models. The results demonstrate that selective image compression significantly reduces transmission times and overall RTT without compromising prediction quality, enabling faster and more reliable vehicle responses. This work contributes to the development of scalable, energy-efficient solutions for urban public transport systems. It highlights the potential of integrating edge AI frameworks to enhance driving safety and efficiency while addressing critical challenges such as data transmission constraints, model latency, and resource optimization. Future directions include extending the framework to incorporate multiple modalities, broader datasets, and advanced communication protocols for improved scalability and robustness.
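The bandwidth/quality trade-off at the heart of the compression study can be illustrated with a short sketch using Pillow and a synthetic frame; the paper's codecs and settings may differ:

```python
import io

import numpy as np
from PIL import Image

# Synthetic stand-in for a vehicle-camera frame; the paper uses the
# Zenseact Open Dataset.
frame = Image.fromarray(
    (np.random.default_rng(0).random((720, 1280, 3)) * 255).astype("uint8")
)

# Sweep JPEG quality levels and report payload size: smaller payloads
# mean lower transmission time and round-trip time (RTT) to the edge server.
for quality in (90, 70, 50, 30):
    buf = io.BytesIO()
    frame.save(buf, format="JPEG", quality=quality)
    kb = buf.tell() / 1024
    print(f"quality={quality:3d}  size={kb:8.1f} KiB")
```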
Authors - Haesol Kim, Eunjae Kim, Sou Hyun Jang, Eun Kyoung Shin Abstract - This study empirically examined the historical trajectories of the semantic landscape of legal conflicts over medical decision making. We unveiled the lexical structures of lawsuit verdicts, tracing how the core concepts of shared decision making (SDM), namely duty of care, duty to explain, and self-determination, have developed and been contextualized in legal discourses. We retrieved publicly available court verdicts using the search keyword ‘patient’ and screened them for relevance to doctor-patient communications. The final corpus comprised 251 South Korean verdicts issued between 1974 and 2023. We analyzed the verdicts using neural topic modeling and semantic network analysis. Our study showed that topic diversity has expanded over time, indicating increased complexity of semantic structures regarding medical decision-making conflicts. We also found two dominant topics: disputes over healthcare providers’ liability and disputes over compensation for medical malpractice. The semantic network analysis showed that the rhetoric of patients’ right to medical self-determination is not closely tied to the professional responsibility to explain and care. The decoupled semantic relationship between patients’ rights and health professionals’ duties revealed barriers to SDM implementation.
Authors - Given Sichilima, Jackson Phiri Abstract - The health and productivity of poultry farms are significantly impacted by the timely detection of diseases within chicken houses. Manual disease monitoring in poultry is laborious and prone to errors, underscoring the need for sustainable, efficient, reliable, and cost-effective farming practices. The adoption of advanced technologies, such as artificial intelligence (AI), is essential to address this need. Smart farming solutions, particularly machine learning, have proven to be effective predictive analytical tools for large volumes of data, finding applications in various domains including medicine, finance, and sports, and now increasingly in agriculture. Poultry diseases, including coccidiosis, can significantly impact chicken productivity if not identified early. Machine learning and deep learning algorithms can facilitate earlier detection of these diseases. This study introduces a framework that employs a Convolutional Neural Network (CNN) to classify poultry diseases by examining fecal images to distinguish between healthy and unhealthy samples, where unhealthy fecal images may indicate the presence of disease. An image classification dataset was utilized to train the model, which achieved an accuracy of 84.99% on the training set and 90.05% on the testing set. The evaluation indicated that this model was the most effective for classifying chicken diseases. This research underscores the benefits of automated disease detection within smart farming practices in Zambia.
Authors - Ruqaya Majeed Kareem, Mohammed Kh. Al-Nussairi Abstract - Since the establishment of microgrids, frequency stability and reliable voltage operation have become necessary owing to local sources of reactive power. Droop control technology has been successfully applied to this problem and remains popular today. This study proposes a control strategy that can be utilized for power sharing and for adjusting the voltage and frequency appropriately according to the load condition. The main aim of the research is to control the frequency and voltage of microgrids under various conditions by using two algorithms, the Grey Wolf Optimizer (GWO) and the Kepler Optimization Algorithm (KOA), to optimize the droop control and the PI controller parameters. Simulation findings using Simulink in MATLAB demonstrate the performance of the suggested microgrid stability techniques. Finally, to evaluate the efficiency of the suggested control strategy, its results are compared with conventional methods.
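A toy sketch of the optimization loop: a standard Grey Wolf Optimizer tuning PI gains against a simple first-order plant, used here as a stand-in for the Simulink microgrid model, which is not reproduced:

```python
import numpy as np

def pi_cost(gains):
    """Integral of absolute error for a PI-controlled first-order plant
    (a toy stand-in for the microgrid frequency/voltage loops)."""
    kp, ki = gains
    y, integ, err_sum, dt = 0.0, 0.0, 0.0, 0.01
    for _ in range(1000):                 # 10 s step response, setpoint = 1
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)                # plant: dy/dt = -y + u
        err_sum += abs(e) * dt
    return err_sum

def gwo(cost, bounds, n_wolves=20, iters=100, seed=0):
    """Grey Wolf Optimizer: wolves move toward the three best solutions
    (alpha, beta, delta) with a coefficient that decays over iterations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(iters):
        fit = np.array([cost(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2 - 2 * t / iters             # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros_like(X[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3, lo, hi)
    fit = np.array([cost(x) for x in X])
    return X[fit.argmin()], fit.min()

best, score = gwo(pi_cost, (np.array([0.0, 0.0]), np.array([10.0, 10.0])))
print(f"Kp={best[0]:.3f}  Ki={best[1]:.3f}  IAE={score:.4f}")
```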
Authors - Ahmed Abu-Khadrah, Munirah Ali ALMutairi, Mohammad R. Hassan, Ali Mohd Ali Abstract - Internet of Things (IoT) devices are employed in various industries, including health care, smart homes, smart grids, and smart cities. Researchers address the intricate connection between the growth of the IoT and the hazards to its security: the vast and varied features of the IoT make traditional security solutions ineffective. This study investigates the efficacy of a novel ensemble machine-learning approach for detecting malware within the IoT domain, combining three machine learning algorithms: K-Nearest Neighbours (KNN), Bagging, and Support Vector Machines (SVM). The proposed model is evaluated by measuring accuracy, precision, recall, and F1-score on two comprehensive datasets. On the validation set, the model achieved an accuracy of 95.76%, a precision of 97.01%, a recall of 94.55%, and an F1-score of 95.77%, correctly classifying both malware and benign programs within the utilized IoT dataset.
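A minimal sklearn sketch of such an ensemble, combining the three named base learners via soft voting on synthetic data; the paper's exact combination scheme and datasets may differ:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for IoT traffic features labeled malware/benign.
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.6],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the predicted class probabilities of the three
# base learners named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=7)),
        ("bag", BaggingClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te), digits=4))
```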
Authors - Hoang Minh Tuan, Ngo Gia Quoc, Nguyen Huu Tien, Vu Thu Diep Abstract - This paper provides a method for automatically detecting fake beef by image analysis. High-quality classification models could have a major impact on ensuring food quality, supporting supply chain management in the meat industry, and preventing fraudulent commercial practices. Because low-quality meat is cheaper and more widely available than beef, it is commonly passed off as beef. The problem is challenging due to differences in meat appearance, texture, marbling, and color of cuts, as well as similarities between real and fake beef. These characteristics require a robust method capable of distinguishing subtle features to obtain reliable results. This paper leverages the strength of Convolutional Neural Networks to classify real beef and fake beef. The model targets mobile applications and is suitable for practical deployment in various environments.
Authors - George Kwamina Aggrey, Amevi Acakpovi, Emmanuel Peters Abstract - ERP systems are integrated information systems (IIS) widely used among tertiary institutions around the globe, and they have become familiar in certain regions owing to large-scale adoption in higher education. Notwithstanding the rising acquisition, selection, and implementation of ERPs in higher education, there remains a scarcity of literature on their performance, especially in the developing world. It is therefore important to examine further whether these ERPs fulfill their anticipated benefits. This paper aims to evaluate the effectiveness of KNUST's enterprise system (comprising the ARMIS, Panacea, and Synergy systems) in HEIs through a system-integrative framework. A mixed-method research approach was employed, collecting data from a sample of 60 respondents for both quantitative and qualitative investigation. The data were examined through partial least squares structural equation modeling (PLS-SEM) and inductive thematic analysis. The study's results revealed that the customer/stakeholder, learning and growth, financial, and system quality perspectives significantly and positively influence the effectiveness of KNUST's enterprise system in Ghanaian higher education. The internal business process perspective, according to the findings, was the only one with no significant impact on the performance of the KNUST enterprise system. Works on ERP assessment, readiness, and implementation are scarce in the developing world, particularly in the Ghanaian context. This study has successfully assessed the KNUST enterprise system, demonstrating its effectiveness through the research model deployed.
Authors - Jimmy Katambo, Gloria Iyawa, Lars Ribbe, Victor Kongo Abstract - The vulnerability of Southern Africa to climate variability, especially drought, places substantial pressure on agriculture, water systems, and the economy. This study explores how El Niño-Southern Oscillation (ENSO)-related Sea Surface Temperature (SST) variations influence drought patterns across the region using machine learning methods. Two approaches were taken: (i) a feature ranking of SST in comparison to twelve other climate variables and (ii) drought model performance comparisons with and without SST data. Results reveal SST’s significant and consistent impact across all climate zones, with both methods indicating that SST data, particularly in connection with ENSO phases, strongly influences drought variability, despite slight variations in its order of effect with respect to climatic zonal divisions. This underscores the value of incorporating SST in climate models for enhanced drought prediction and adaptation planning. Although limited by a focus on SST and not fully accounting for interactions with other climate factors, this research provides a solid foundation for understanding regional climate dynamics. Adding more climate indicators and studying SST’s interactions with land-based factors could help future studies make drought predictions more reliable and better prepare vulnerable areas.
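A minimal sketch of approach (i), ranking an SST index against other climate variables by random-forest feature importance; the data is synthetic and the study's actual variables and models may differ:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Hypothetical predictors: an ENSO-linked SST index plus other climate
# variables; the synthetic drought index depends strongly on SST.
X = pd.DataFrame({
    "sst_nino34": rng.normal(size=n),
    "soil_moisture": rng.normal(size=n),
    "precipitation": rng.normal(size=n),
    "air_temperature": rng.normal(size=n),
    "wind_speed": rng.normal(size=n),
})
drought_index = (-1.5 * X["sst_nino34"] + 0.5 * X["precipitation"]
                 + rng.normal(scale=0.5, size=n))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, drought_index)

# Rank variables by impurity-based importance, as in approach (i).
ranking = pd.Series(model.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False))
```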
Authors - Ping Luo, Kyle Gauthier, Bo Huang, Wenjun Lin Abstract - Data visualization is a critical tool for interpreting complex information, yet it often remains inaccessible to those without extensive technical and analytical skills. This study introduces a novel multi-agent system leveraging a large language model (LLM) to democratize the process of creating high-quality visualizations. By automating the stages of planning, coding, and interpretation, the system empowers users with diverse backgrounds to generate accurate and meaningful visual representations of data. Our approach employs multiple specialized agents, each focusing on different aspects of the visualization workflow, thereby enhancing the overall quality through collaborative problem-solving and contextual communication. The iterative refinement phase ensures that the visualizations meet the initial objectives and data characteristics, thus improving accuracy and relevance. This study’s modular design allows for scalability and adaptability to various data types and visualization needs, ensuring the system remains current with emerging tools and frameworks. By lowering the barriers to effective data visualization, our system supports broader data-driven decision-making across various domains, fostering more inclusive and impactful data analysis practices. Validation on two public datasets demonstrates that our multi-agent framework generates visualizations that achieve comparable or superior quality metrics when benchmarked against human expert analysis.
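A minimal sketch of such a multi-agent loop; the llm() stub and the planner/coder/critic roles are illustrative assumptions, not the paper's implementation:

```python
# Planner -> coder -> critic loop with iterative refinement. The llm()
# stub is hypothetical: wire it to whichever chat-completion API you use.
def llm(role: str, prompt: str) -> str:
    raise NotImplementedError("connect to your LLM provider here")

def visualize(dataset_summary: str, goal: str, max_rounds: int = 3) -> str:
    # Planning agent decides what chart answers the user's question.
    plan = llm("planner",
               f"Given this dataset:\n{dataset_summary}\n"
               f"Plan a chart that answers: {goal}")
    # Coding agent turns the plan into plotting code.
    code = llm("coder",
               f"Write matplotlib code implementing this plan:\n{plan}")
    # Refinement phase: a critic checks the code against the plan and goal.
    for _ in range(max_rounds):
        critique = llm("critic",
                       f"Does this code satisfy the plan and goal?\n"
                       f"Plan:\n{plan}\nCode:\n{code}\n"
                       "Reply APPROVED or list concrete fixes.")
        if critique.strip().startswith("APPROVED"):
            break
        code = llm("coder",
                   f"Revise the code per this feedback:\n{critique}\n\n{code}")
    return code
```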
Authors - Nelly Galora-Moya, Paola Ramos-Medina, Elsa Hernandez-Cherrez, Javier Sánchez-Guerrero Abstract - This study analyzes the necessity and impact of English for Specific Purposes (ESP) courses on B1+ level students, integrating gamification as an innovative approach through the use of the Duolingo platform. Surveys were administered to students and faculty members from various departments to gather their perceptions on the relevance and feasibility of gamified ESP courses. Additionally, a preliminary diagnostic test was conducted to assess technical vocabulary knowledge in a gamified environment. The results show a general consensus on the importance of ESP courses, highlighting Duolingo as an effective tool for personalizing learning and enhancing linguistic competencies in specific contexts. Significant gaps in students' linguistic skills were also identified, justifying the incorporation of this methodology. The paper proposes a collaborative program between the faculties of the Technical University of Ambato (UTA) and the Language Center, with Duolingo playing a central role in designing a gamified curriculum to bridge the gap between academic English and the specialized skills required in professional settings. This approach not only improves academic performance but also equips graduates with the linguistic skills necessary to compete in the global job market. Gamification, by combining motivation and interactivity, fosters autonomous and collaborative learning. This work contributes to the current debate on the use of technology in ESP teaching, emphasizing gamification as a key strategy for personalizing learning. Future research should assess the longitudinal impact of gamified platforms like Duolingo on academic performance and graduates’ career trajectories.
Authors - Reem M. Zemam, Nahla A. Belal, Aliaa Youssif Abstract - According to the World Health Organization’s statistical data for 2024, breast cancer is the most often diagnosed cancer among women. Between 2020 and 2024, approximately 37,030 new instances of invasive breast cancer were documented in women. Recent advancements in deep learning have shown considerable potential to improve the accuracy of breast cancer diagnosis, ultimately aiding radiologists and clinicians in making more precise decisions. This study presents a strategy that creates a highly dependable ultrasound analysis reading system by comparing the powerful processing capabilities of CNNs across four pretrained models (transfer learners): DenseNet 169, ResNet 152, MobileNet V2, and Xception. To assess the effectiveness of the proposed framework, experiments were conducted using established benchmark datasets (the BUSI datasets). The suggested framework demonstrated superior performance compared to previous deep learning architectures in precisely identifying and categorizing breast cancers in ultrasound images. Upon comparison of the specified deep learning models, DenseNet 169 achieved the highest performance with an accuracy of 99.7%, surpassing the results reported in the literature. This research employs advanced deep learning algorithms to enhance breast cancer diagnostic outcomes, decreasing diagnosis time and facilitating early treatment.
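A minimal Keras sketch of one of the four transfer learners (DenseNet169 with a frozen ImageNet backbone and a new head for the three BUSI classes: normal, benign, malignant); the paper's training details may differ:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# DenseNet169 transfer learner: the ImageNet backbone is frozen and only
# the new classification head is trained on the ultrasound images.
base = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # with a tf.data
# pipeline built from the BUSI images
```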
Authors - Eduardo Pinos-Velez, Adriana Martinez-Munoz, Dennys Baez-Sanchez Abstract - Prolonged sitting is a significant contributor to various health issues, including pressure ulcers, lower back pain, and circulatory disorders. This paper provides an analysis of these pathologies, examining their underlying causes, physiological impacts, and the compounding role of risk factors such as physical inactivity and poor posture. Furthermore, the study evaluates technological solutions designed to mitigate these risks. These include advanced sensor-integrated cushions and alternating pressure systems that facilitate weight redistribution to prevent tissue damage.
Authors - Desmond Kwadjo Kumi, Sheena Lovia Boateng Abstract - This study uses the utility theory framework to investigate cryptocurrency awareness and usage behavior among Gen Z in Ghana. Data were collected from 700 individuals in the educational sector of Ghana, aged 18-25 years, through purposive and snowball sampling, using structured questionnaires, with 657 usable responses. Data were analyzed using SPSS, AMOS, and the Hayes Process Macro. The results revealed that perceived benefits have a significant indirect effect on cryptocurrency usage behaviour via cryptocurrency’s perceived value. Perceived risks lessen the influence of perceived benefits on perceived value, whereas personal innovativeness strengthens this link. The survey further revealed very high awareness of Bitcoin and other cryptocurrencies, but comparatively lower awareness of the underlying blockchain technology. Awareness of, attitudes toward, and ownership of cryptocurrencies were higher among males than females, showing a gender gap in the awareness and ownership of digital assets. This study is arguably one of the few sources with insights into applying utility theory to understand cryptocurrency usage behaviour among Gen Z in a developing economy like Ghana. Practitioners and policymakers could therefore tailor strategies that address awareness and ownership gaps and optimize utility dimensions.
Authors - Varsha Naik, Rajeswari K, Kshitij Jadhav, Aniket Rahalkar Abstract - This study examines cross-lingual natural language processing (NLP) techniques to address the challenges of developing conversational AI systems for low-resource languages. These languages often lack extensive linguistic resources such as large-scale corpora, annotated datasets, and language-specific tools, making it difficult to capture the linguistic distinctions and contextual meaning essential for high-quality dialogue systems. This language gap restricts accessibility and inclusivity, preventing speakers of these underrepresented languages from fully benefiting from advancements in technology. The study compares various factors that affect model performance, including transformer model architecture, cross-lingual embeddings, fine-tuning strategies, and transfer learning approaches. Despite these challenges, the research shows that cross-lingual models offer promising solutions, especially when utilizing techniques like transfer learning and multilingual pre-training. By transferring knowledge from high-resource languages, these models can compensate for the scarcity of data in low-resource languages, enabling the development of more accurate, culturally sensitive, and inclusive AI systems. The findings highlight the importance of bridging linguistic divides to foster greater language diversity, accessibility, and technological inclusivity, ultimately supporting cultural preservation and revitalization.
Authors - Elrasheed Ismail Mohommoud Zayid, Ahmad Mohammad Aldaleel, Omar Abdullah Omar Alshehri Abstract - Machine learning classifiers are a prime candidate methodology for assessing digital innovation across a set of teachers. This study aims to collect, build, represent, and discuss a reliable digital innovation skills (DIS) dataset recruited from teachers who work in Bisha Province, Saudi Arabia. The study processed a rich data sample and made it accessible and shareable for researchers' open use. The DIS assessment addressed the problems and helped design a suitable innovation training module for local community teachers. The total dataset comprises 400 conveniently collected data points, each representing a complete record of a teacher in the Bisha Province DST community. The research fields were prepared as fifty questionnaire questions, distributed across the DST community in the area using social networks. Each question represents a single input or output feature for the classification model. Before running the ML models, the input variables were encoded serially from F0 to F49, and based on an exploratory test performed using LazyPredict tools, only the positively contributing features were used. The extensive dataset, which is kept in the Mendeley Data repository, offers a great deal of potential for reuse in sensitivity analysis, policymaking, and additional study. The decision tree, extra trees, and extreme gradient boosting (XGB) classifiers are examples of the algorithms recruited for evaluating DIS. The authors believe that this rich, innovative repository dataset with its classification features will become a valuable mining source for interested researchers.
Authors - Malgorzata Pankowska Abstract - Business information system (BIS) consultants work on solving problems of client companies, providing them with high-quality services and helping them respond quickly to changes in their ecosystems and to the changes initiated by new technologies. The client is usually the most important actor in the consulting process; therefore, consultants must be well educated to ensure the most satisfying solutions. This study focuses on developing business information system analysts’ competences to enable their participation in consulting projects. A thematic review of literature was applied, the author’s framework of consultants’ competencies for business information system strategic analysis is provided, and finally, the author formulates a recommendation on a business analysis course for students of computer science at university. The findings indicate that students’ motivation, knowledge, and experience, as well as a strong theoretical background and methodological support from cooperating business units, influence the innovativeness and creativity of BIS consultants.
Authors - Martin Mayembe, Jackson Phiri Abstract - Religious organizations, particularly church organisations, play a significant role in the lives of many people globally. These organisations require efficient management of various operations, such as the management of members, finances, events, and communications, to fulfil their mission effectively. Existing church management systems are often built using traditional monolithic architectures, which come with inherent challenges, including platform dependence, limited scalability, and high upfront investment, making it difficult for many church organizations to develop, maintain, and scale their systems effectively and efficiently. This method of development is often referred to as the Spaghetti model. This study explores the application of the Microservices Architecture in church management systems, using a service bus to enable communication between the services, to achieve modularity and scalability. To demonstrate the effectiveness of this design, a prototype is developed, focusing on two key modules: the Church Member Management System and the Financial Management System. These modules work in tandem to manage member and associated member contribution data and to provide access to up-to-date vital information.
Authors - Williams A. Ayara, Adenike O. Boyo, Mustapha O. Adewusi, Razaq O. Kesinro, Mojisola R. Usikalu, Kehinde D. Oyeyemi Abstract - The search for enhanced green electricity generation and the constant increase in the price of crude oil and its products propelled this research: a photovoltaic fuel-less power-generating system built from locally available materials. The input and output characteristics are analyzed to determine the efficiency, and the power generated by the photovoltaic-powered fuel-less generator is used to power an external load. The photovoltaic array is oriented to face southward at the optimum tilt for maximum yield of solar power; this orientation and tilt angle were determined using a Garmin Oregon 450 GPS in conjunction with a Seaward Solar Survey 200R meter. The driving component of the system is a 1 HP Direct Current (DC) motor, powered by two 250 W mono-crystalline solar panels via a 12 V battery connected to a 30 A charge controller that maintains the battery's charge level; the motor spins a 650 W Alternating Current (AC) alternator to deliver electricity. The device efficiently delivered power, lighting three incandescent bulbs and a standing fan with a total power of 100-220 W at an efficiency of 70-75%. This generator is eco-friendly since it does not emit any contaminants into the environment.
Authors - Sarah Anis, Mohamed Mabrouk, Mostafa Aref Abstract - This research paper investigates the application of sentiment analysis in the fintech sector, focusing on stock market prediction through a transformer-based model, specifically FinBERT. By comparing its performance against established models like CNN, LSTM, and BERT across different datasets, the study demonstrates that FinBERT achieves superior accuracy in classifying sentiments from financial reviews. The findings emphasize the significance of specialized models tailored to specific domains for improving sentiment analysis within the financial sector, providing useful information for those involved in the fintech field.
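A minimal sketch of FinBERT-based sentiment classification using the publicly available ProsusAI/finbert checkpoint; the paper's fine-tuned model and datasets may differ:

```python
from transformers import pipeline

# ProsusAI/finbert is a FinBERT checkpoint fine-tuned for financial
# sentiment (positive / negative / neutral).
finbert = pipeline("text-classification", model="ProsusAI/finbert")

reviews = [
    "The company beat earnings expectations and raised full-year guidance.",
    "Margins collapsed amid rising costs and shrinking demand.",
]
for review, result in zip(reviews, finbert(reviews)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {review}")
```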
Authors - Carlo Giuseppe Pirrone Abstract - This study explores urban accessibility for the elderly, focusing on the importance of integrating objective and subjective measures for a comprehensive assessment. Objective measures, such as cumulative opportunity measures (CUM) or measurable travel times, quantify spatial data while often neglecting personal experiences and user perceptions. Subjective measures, obtained through surveys, become crucial in defining the ease of access to services, including satisfaction levels derived from the journey, barriers due to individual factors (age, health, disability), and comfort and safety. A combined methodology would promote a new interpretation of urban accessibility. A case study conducted in Rende, Italy, illustrates a practical application by mapping healthcare services and public transport to assess pedestrian accessibility. A pilot survey gathered elderly residents' perceptions of distance, travel time, and service satisfaction. Preliminary results indicate a reluctance to walk, overestimated perceived distances, and a strong reliance on private vehicles, highlighting the need for infrastructure and services that better connect the elderly to healthcare. Ongoing research will further refine the study by adapting objective measures to local perceptions to develop a specific accessibility indicator for the area.
Authors - Dilshan De Silva, Dulaj Dawlagala Abstract - This research demonstrates a reliable and efficient IoT solution aimed at improving the safety of campers and hikers in outdoor settings. It consists of one primary device and a set of peripherals, which employ LoRa communication, GPS positioning, and environmental parameters for location-based services. The primary device consists of an ESP32 microcontroller fitted with a LoRa 433 MHz module and a NEO-7M GPS, using a SYN-ACK protocol that allows the device to communicate constantly with the subordinate devices. The subordinate devices, also based on the ESP32, have LoRa modules and OLED screens for receiving and displaying information about geofences and locational alerts. A significant strength of this system is that it can function in remote areas by bringing all the devices together into a mesh network, so that data and devices can be synchronized without relying on an internet connection. Both primary and subordinate devices can connect to the internet wherever possible to update, synchronize, and transfer messages efficiently. Failure is further minimized to support the effective communication and precise positioning that are important in managing outdoor safety hazards. The first prototype tests have shown the system's ability to solve problems such as real-time interaction, data integration, and functioning in difficult environments. Effective outdoor deployment strategies were developed to support further development and deployment of IoT systems for outdoor applications. As the next step, the system will be tested on a larger scale to assess its scalability, and user-centered interfaces will be redesigned to accommodate real-world scenarios.
Authors - Jude Osakwe, Sinte Mutelo, Nelson Osakwe Abstract - This study aimed to investigate the interrelationship between data governance and compliance with privacy regulations. A systematic literature review was conducted to synthesise the existing research on data governance's impact on compliance with privacy regulations. The study found that data governance has a positive association with compliance, with integrated data governance methods and processes supporting decision-making, and stakeholders' involvement guaranteeing transparent processes. The findings also suggested that the impact of data governance on privacy regulations compliance needs a certain maturity level and top management support. Key recommendations for organisations are outlined to enhance their governance frameworks, promote transparency, and align resources effectively in order to bolster compliance with privacy regulations. The study concludes by addressing identified research gaps and offering directions for future studies aimed at exploring the evolving landscape of data governance and privacy compliance.
Authors - Nabeela Kausar, Ramiza Ashraf, Saeed Nawaz Khan Abstract - Skin cutaneous melanoma is one of the most aggressive forms of skin cancer, with various prognostic factors that can significantly impact patients’ survival outcomes. Survival analysis helps identify key factors influencing patient outcomes and guides clinical decision-making. In the literature, statistical methods have been used for the survival analysis of skin cancer patients, but these methods have limitations. To address the limitations of traditional statistical methods, researchers have developed a range of machine learning (ML) based survival analysis techniques. These ML techniques offer advanced capabilities for modeling complex relationships and improving prediction accuracy, but the "black box" nature of ML models poses a challenge, especially in fields like healthcare where understanding the rationale behind predictions is crucial. In this work, Explainable AI (XAI) based survival analysis is carried out using an XGBoost model and clinical features of skin cutaneous melanoma patients. XAI models explain their predictions by showing the important features involved, demonstrating the reliability needed for clinical use. To validate the performance of the XAI model, a multivariate regression based Cox Proportional Hazards (CPH) model was developed, which shows the relationship between patients’ clinical features and survival time. The proposed XAI based model has a C-index of 84.3% and shows that age, pathology T stage, and pathology N stage are key factors influencing the survival of skin cutaneous melanoma patients. The CPH model further validates the strong association between these features and patient survival.
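A minimal sketch of the validating Cox Proportional Hazards analysis using lifelines on synthetic clinical data shaped like the paper's covariates (age, T stage, N stage); the real cohort and coefficients differ:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 400

# Synthetic clinical table: survival time in months, event indicator,
# and the prognostic factors highlighted in the abstract.
df = pd.DataFrame({
    "age": rng.integers(25, 85, n),
    "t_stage": rng.integers(1, 5, n),   # pathology T stage (1-4)
    "n_stage": rng.integers(0, 4, n),   # pathology N stage (0-3)
})
risk = 0.03 * df["age"] + 0.4 * df["t_stage"] + 0.5 * df["n_stage"]
df["months"] = rng.exponential(120 / np.exp(risk - risk.mean()))
df["event"] = rng.random(n) < 0.7

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()                     # hazard ratios per covariate
print("C-index:", cph.concordance_index_)
```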
Authors - Mulima Chibuye, Jackson Phiri Abstract - Agricultural systems have been modeled, and yields predicted, since the beginning of agriculture. Improvements in crop science and better tools have made the task easier through the ages, from using the position of the sun, to observing that certain weeds signify a good harvest, to determining which factors precede observable phenomena. The field has become so advanced that we can build better prediction models, and quantum computation, which can model far more complex systems and the interactions among individual parameters within them, promises to let us predict crop yields with better accuracy than has ever been deemed feasible. With the technology available now, we can apply properties of physical systems on classical computers, such as mimicking chaos theory, to add randomness to our predictions, as that is the way nature works. That randomness arises from fluctuations in initial conditions; we normally call it random because we are missing certain parameters that, if collected, would greatly improve how we predict physically chaotic systems. The aim of this work is to explore how we can incorporate chaos in agricultural systems by using a hybrid approach combining known systems like dense neural networks with more recent methods such as Echo State Networks.
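A minimal numpy sketch of an Echo State Network, the reservoir-computing method named above, fitted to a chaotic logistic-map series; this is illustrative only, and the paper's hybrid architecture is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chaotic series: the logistic map at r = 3.9.
T = 600
x = np.empty(T); x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

# Fixed random reservoir; only the linear readout is trained.
n_res = 200
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1

# Collect reservoir states driven by the input series.
states = np.zeros((T - 1, n_res))
h = np.zeros(n_res)
for t in range(T - 1):
    h = np.tanh(W_in[:, 0] * x[t] + W @ h)
    states[t] = h

washout = 100                               # discard transient states
X, y = states[washout:], x[washout + 1:]

# Ridge-regression readout: one-step-ahead prediction.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```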
Authors - Cynthia Sherin B, Poovammal E Abstract - Vehicle detection and classification play a key role in the evolution of intelligent transportation systems. Accuracy in detection enhances the efficiency of intelligent traffic monitoring systems. This paper presents a comparative study of the performance of the YOLOv8 and YOLOv10 detection models on the newly introduced Vehicle Identification aNd Classification (VINC) dataset. The models detect multi-class vehicles such as cars, trucks, buses, bicycles, and bikes. The achievements of each model are assessed using precision, recall, F1-score, and confusion matrices. The experimental results demonstrate the superiority of YOLOv10 over YOLOv8 in detecting very small and more complex vehicle structures in traffic scenarios. Meanwhile, YOLOv8 exhibited equivalent detection accuracy for large vehicles like buses and trucks by capturing minute variations in the processed features. The detection models achieve precision of 97.2% and 93.6% for YOLOv10 and YOLOv8 respectively, and YOLOv10 achieves a recall rate and F1-score of 92.4% and 81.4% respectively. The detection performance thus demonstrates the robust characteristics of both YOLO versions. This paper delineates the merits and drawbacks of the two versions in real-time circumstances, informing the creation of faster and more precise detection models.
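A minimal sketch of such a comparison using the ultralytics package, whose recent releases provide both model families; "vinc.yaml" is a placeholder for the VINC dataset in standard YOLO data-file format:

```python
from ultralytics import YOLO

# Train and validate both detector families on the same dataset, then
# report the aggregate detection metrics used for the comparison.
for weights in ("yolov8n.pt", "yolov10n.pt"):
    model = YOLO(weights)
    model.train(data="vinc.yaml", epochs=50, imgsz=640)
    metrics = model.val()
    print(weights,
          "mAP50-95:", metrics.box.map,
          "precision:", metrics.box.mp,
          "recall:", metrics.box.mr)
```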
Authors - Mariam Basim Al-Najjar, Khalil H. Sayidmarie Abstract - This contribution investigates a proposed Frequency Selective Surface (FSS) that can be reconfigured using a light-dependent resistor (LDR). The unit cell of the FSS comprises a split square ring with a single LDR placed at its gap. The FSS is built on an FR4 substrate of 40 x 40 mm, with a ring size of 29 x 29 mm, to serve the 2.45 GHz WLAN application. When the LDR is adequately illuminated it exhibits a small resistance and the ring behaves as a closed ring, while in the dark the resistance is high and the ring acts as a split ring. Therefore, the FSS works as a bandpass filter when illuminated and as a bandstop filter without illumination. The LDR does not need biasing wires, which usually interfere with the structure of the FSS.
Authors - Moubaric Kabore, Abdoulaye Sere, Vini Yves Bernadin Loyara Abstract - This paper deals with a Bayesian approach to data retrieval in GIS databases through artificial intelligence (AI) modules, which read the best Bayesian probability before returning the requested data; the approach is denoted AI4DB. The proposed method combines meshing techniques and the map-reduce algorithm with the Bayesian approach to obtain a smart GIS database with reduced execution time. Based on the values of the Bayesian probability, the sites nearest to any position requested by the user are extracted speedily from the database using the map-reduce framework. The execution time is less than that of the classical method, which is based only on a parallel search without a probability: only a map function with the best Bayesian probability for the input data executes its instructions in full.
Authors - Denis Vasiliev, Rodney Stevens, Lennart Bornmalm Abstract - The idea of smart cities is becoming increasingly popular. Technology alone, however, is not sufficient for addressing the challenges that modern cities face. Pressures stemming from pollution, stress, and changing environmental conditions can ruin the lives of city dwellers, putting enormous pressure on public finance to address or mitigate consequences of the issues. Application of Nature-based Solutions could address multiple societal and environmental issues common to modern cities. Furthermore, the solutions can immensely benefit from integration with modern technologies. In fact, modern technologies can enhance the implementation, monitoring, and scalability of Nature-based Solutions. This makes a strong case for the application of Nature-based Solutions in modern urban environments that aspire to promote technology and become smart cities. Integration of the solutions into smart cities, however, is not a trivial task and requires a holistic approach to city planning and deep understanding of the ways to do so. Justification of the associated costs requires thorough understanding of the benefits, including the values that may be easily overlooked. Thus, in this study, we apply a conceptual research approach to explore how Nature-based Solutions can be integrated into Smart Cities, what are key benefits of the approach, and how this integration can address significant challenges in urban environments.
Authors - Victor Sineglazov, Andrew Sheruda Abstract - In this paper, a new hybrid segmentation method based on semi-supervised learning (SSL) was developed for samples consisting of image sequences, not all of which are labeled. The method can thus find application in areas where labeling is expensive or requires a specialist, such as medicine. The developed method was evaluated on a sample of echocardiography images of patients with infective endocarditis, in the context of the real-world task of segmenting heart valve anomalies. As a result, the accuracy gain compared to supervised learning is 5% in the IoU metric, versus an average of 3% for other SSL methods.
Authors - Ervin Mikhail G. Garcia, Zara Naomi S. Inocencio, Ronnel Christian B. Langit, Paul James L. Perez, Harold Russell P. Visperas, Mary Jane C. Samonte Abstract - SHair is an innovative web-based tool designed to make the process of donating hair easier, with the primary objective of helping cancer patients regain self-confidence. SHair empowers hair donation by providing a user-friendly, safe site for anybody wishing to give their hair. The main goal of the website is to have a substantial impact on the well-being of cancer patients by connecting persons who intend to contribute their hair with those in need of hair transplants. This connection fosters mutual understanding between individuals and strengthens the resilience of communities. Users emphasize the ease of hair renewal, since it brings donors and patients together, which benefits patients' mental health. SHair also manages the processing and distribution of donated hair, supporting efforts that focus on the happiness and well-being of cancer patients.
Authors - Denis Vasiliev, Rodney Stevens, Lennart Bornmalm Abstract - Implementation of Nature-based Solutions is becoming increasingly widespread. The solutions are intended to simultaneously address environmental, social and economic challenges. This approach is intended to foster sustainable development. However, mere use of nature for addressing specific problems does not necessarily result in simultaneous delivery of value in all three sustainability areas. To make sure that Nature-based Solutions serve both nature and society and deliver maximal benefits, a holistic approach to their implementation is essential. Implementing this approach is, however, not a trivial goal. It requires joint consideration of environmental, social and economic factors at a range of spatial and temporal scales. Furthermore, collaboration among multiple diverse stakeholders in the context of rapidly changing systems is essential. This often involves processing large volumes of data and can be very labor intensive. As a result, the costs of such projects may be overly high, hindering their implementation. The emerging technology of Generative Artificial Intelligence can greatly facilitate the process, bringing down the costs and increasing speed and feasibility of the project implementation. However, lack of awareness and understanding of how such tools can be used in Nature-based Solutions projects may result in missing these opportunities. Thus, this paper explores potential applications of Generative Artificial Intelligence tools in projects involving Nature-based Solutions.
Authors - Md Asif Ahmed, Md Sadatuzzaman Saagoto, Farhan Mahbub, Protik Barua Abstract - Graphene is emerging as a strong candidate for qubit applications in quantum computing due to its unique properties and recent technological advancements. Graphene, as a two-dimensional material with high carrier mobility and distinct electron behavior, presents potential advantages for qubit applications. However, its zero-band-gap nature poses challenges for stable quantum states, requiring innovative solutions to realize its full potential in quantum computing. This review explores graphene's unique properties and their impact on qubit design, analyzing recent breakthroughs aimed at overcoming its inherent limitations, such as techniques for band-gap modulation and substrate engineering. We delve into various methodologies, including the integration of hexagonal boron nitride (hBN) and electrostatic gating, to enhance graphene's performance for quantum applications. Additionally, we examine the integration of graphene with other 2D materials and hybrid structures to achieve tunable quantum properties, essential for advancing scalable quantum architectures. This comprehensive analysis aims to bridge the material science challenges with the practical demands of qubit technology, providing a roadmap for leveraging graphene in future quantum systems.
Authors - Kobus Kemp, Lynette Drevin, Magda Huisman Abstract - This paper reports on a study that explores and addresses security challenges in the development of enterprise mobile applications (EMAs). Despite the growing prevalence of mobile applications, security considerations are often overlooked or insufficiently addressed in mobile application software development methodologies. This gap highlights the need to incorporate security training into software developer education. The study used a literature review of software development methodologies (SDMs) and security practices, complemented by case studies involving interviews with industry experts on EMA development processes. Using thematic and cross-case analyses, the study produced a framework designed to guide the integration of security measures into EMA development. Findings revealed a limited emphasis on security aspects in current mobile application development practices. Consequently, a partial framework is presented in this paper, detailing key security considerations and countermeasures specific to EMA development. This research contributes to the discipline by offering developers guidelines to enhance security in EMAs, emphasizing the importance of integrating these practices into developer training programs.
Authors - Marisol Roldan-Palacios, Aurelio Lopez-Lopez Abstract - Limited available data becomes a critical problem in specific machine learning tasks where approaches such as large language models turn impractical. Reaching solutions in such situations requires alternative methods, especially when the object of study contributes to data scarcity while precluding techniques such as data augmentation. This scenario led us to formulate the research question of how to squeeze hidden information from small data. In this work, we propose a data processing and evaluation technique to increase information extraction from scarce data. Attributes expressed as trajectories are pair-related by proximity and assessed by customary learning algorithms. The efficacy of the proposed approach is tested and validated on language samples from individuals affected by a brain injury. Direct classification on raw and normalized data from three sets of lexical attributes serves as a baseline. Here, we report two learning algorithms out of the five explored, showing consistent behavior and demonstrating satisfactory discriminatory capabilities of the approach in most cases, with encouraging percentages of improvement in terms of F1-measure. We are in the process of testing the approach on language datasets of syntactic and fluency features, but other fields can take advantage of the technique.
Authors - Khalaf Elwadya, Khosro Salmani Abstract - The evolution of social media platforms has led to the creation of a dynamic ecosystem, abundant in user-generated content. This, however, has also raised concerns about data privacy. Beyond potential threats like scammers exploiting freely shared information on social media for spying and financial scams, social media companies themselves can leverage user data to sell targeted advertising. Addressing these issues necessitates heightened user awareness. Hence, this paper first examines the privacy policies of major social media platforms including TikTok, Twitter, Facebook, Instagram, and LinkedIn, providing a comparative analysis of their data storage practices, utilization of user information, account verification requirements, and default privacy settings. Next, we undertake an extensive survey utilizing the data gathered in the initial phase to evaluate user awareness regarding the utilization of their data, highlighting a notable gap between policy stipulations and user expectations. We conclude with four recommendations based on our findings to help social media companies refine their privacy policies, promoting more comprehensible guidelines.
Authors - Venkata Sai Varsha, Prodduturi Bhargavi, Samah Senbel Abstract - This paper provides a thorough analysis of U.S. Congressional history from the 66th to the 118th Congress, examining demographic trends, political shifts, and party dynamics across decades. Using Python-based data processing, the study compiles and interprets historical data to identify patterns in representative demographics, party representation, and legislative impact. The analysis investigates generational changes within Congress, with particular focus on age distribution, tenure, and shifts in political party dominance. Visualizations and statistical insights generated through Python libraries, such as Pandas, Matplotlib, and Seaborn, reveal significant historical events and socio-political influences shaping Congress. By examining age-related trends, the study highlights a generational gap, with older members retaining significant representation and a younger cohort gradually emerging. Additionally, it explores the evolution of bipartisan dominance and third-party representation, offering insights into political diversity and the resilience of the two-party system. This research contributes to the understanding of how demographic and political transformations within Congress reflect broader societal trends and may influence future governance.
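The kind of Pandas/Matplotlib analysis described might be sketched as follows; the file name and the column names ("congress", "age", "party") are assumptions about the dataset, not the study's actual schema:

    # Hypothetical sketch of the demographic analysis: median member age and
    # party representation per Congress.
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv("congress_members.csv")                 # assumed file
    age_by_congress = df.groupby("congress")["age"].median()
    party_share = df.groupby(["congress", "party"]).size().unstack(fill_value=0)
    print(party_share.tail())                                # recent party balance

    age_by_congress.plot(title="Median member age, 66th-118th Congress")
    plt.xlabel("Congress")
    plt.ylabel("Median age")
    plt.show()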
Authors - Taufique Hedayet, Anup Sen, Mahfuza Akter Jarin, Shohel Rana Shaon, Joybordhan Sarkar, Sadah Anjum Shanto Abstract - A price hike is an atypical increase in the cost of essential everyday goods, and it has several contributing factors; everyday items are becoming more and more expensive. In this research, we use Bidirectional Long Short-Term Memory (BLSTM), Long Short-Term Memory (LSTM), AdaBoost, Support Vector Regression (SVR), and Gradient Boosted Regression Trees (GBRT), served through a REST API, to forecast the prices of necessary commodities, and we evaluate efficiency against the value of gold. Our preeminent objective is to find a method for detecting and predicting price hikes that is more accurate and efficient than the other approaches currently available in the relevant literature. The acceptance of the detection and prediction is based on their accuracy and efficiency. Price hike predictions can play an important role in everyday life for many stakeholders, including firms, consumers, and government. The dynamic and sporadic character of market price estimation is highlighted as a major forecasting challenge.
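As one hedged illustration of the listed models, a univariate LSTM forecaster in Keras could look like the sketch below; the synthetic series, window size, and layer width are placeholders rather than the paper's configuration:

    # Sliding-window LSTM for next-step price forecasting (illustrative only).
    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    prices = np.sin(np.linspace(0, 20, 300))     # stand-in for a commodity price series
    window = 10
    X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
    y = prices[window:]
    X = X[..., None]                             # (samples, timesteps, features)

    model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, verbose=0)
    print(model.predict(X[-1:], verbose=0))      # next-step forecast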
Authors - Sathvik Putta, Tejagni Chichili, Samah Senbel Abstract - Traffic accidents remain a critical issue globally, with significant implications for public health, safety, and economic stability. This study provides a comprehensive analysis of traffic accident trends in the northeastern United States, focusing on Connecticut and its neighboring states: New York, New Jersey, New Hampshire, and Massachusetts. By leveraging a dataset encompassing fatal collisions, driver behaviors, and car insurance premiums, this work investigates correlations between risky driving habits, accident outcomes, and the associated financial impacts. Key metrics analyzed include speeding-related incidents, alcohol-impaired driving, distracted driving, and their influence on insurance costs and claims. A rigorous data preprocessing methodology was employed, including normalization, outlier detection, and feature selection, ensuring a robust and reliable dataset for analysis. The study used advanced visualization techniques and statistical modeling, utilizing Python libraries like Pandas, Matplotlib, and Scikit-learn, to identify trends and derive actionable insights. Comparative analysis reveals that while neighboring states such as Massachusetts and New York excel in certain safety metrics, Connecticut lags in addressing critical behavioral risks like speeding and alcohol impairment.
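The named preprocessing steps can be sketched in scikit-learn terms; the file and column names are hypothetical, and the 1.5*IQR rule stands in for whatever outlier criterion the study actually applied:

    # Normalization, IQR-based outlier removal, and feature selection (sketch).
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.feature_selection import SelectKBest, f_classif

    df = pd.read_csv("ne_traffic.csv")           # hypothetical dataset
    num = df.select_dtypes("number")

    q1, q3 = num.quantile(0.25), num.quantile(0.75)
    iqr = q3 - q1
    mask = ~((num < q1 - 1.5 * iqr) | (num > q3 + 1.5 * iqr)).any(axis=1)
    clean = num[mask]                            # rows with no outlying values

    X = MinMaxScaler().fit_transform(clean.drop(columns="fatal_collisions"))
    y = clean["fatal_collisions"] > clean["fatal_collisions"].median()
    X_sel = SelectKBest(f_classif, k=3).fit_transform(X, y)   # keep 3 best features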
Authors - Dawngliani M S, Thangkhanhau H, Lalhruaitluanga Abstract - Breast cancer continues to pose a major public health challenge worldwide, necessitating the development of accurate prediction algorithms to improve patient outcomes. This study aimed to devise a predictive model for breast cancer recurrence using machine learning techniques, with data sourced from the Mizoram State Cancer Institute. Utilizing the Weka machine learning toolkit, a hybrid approach incorporating classifiers such as K-Nearest Neighbors (KNN) and Random Forest was explored. Additionally, individual classifiers including J48, Naïve Bayes, Multilayer Perceptron, and SMO were employed to evaluate their predictive efficacy. Voting ensembles were utilized to augment performance accuracy. The hybridization of Random Forest and KNN classifiers, along with other base classifiers, demonstrated notable improvements in predictive performance across most classifiers. In particular, the combination of Random Forest with J48 yielded the highest ensemble performance accuracy at 82.807%. However, the J48 classifier alone achieved a superior accuracy rate of 84.2105%, signifying its efficacy in this context. Thus, drawing upon the analysis of the breast cancer dataset from the Mizoram State Cancer Institute, Aizawl, it was concluded that J48 exhibits the highest predictive accuracy compared to alternative classifiers.
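The study worked in Weka; a roughly equivalent voting ensemble in scikit-learn, pairing Random Forest with a J48-like decision tree, might look as follows (the bundled breast cancer dataset stands in for the Mizoram data):

    # Soft-voting ensemble of Random Forest and a decision tree (C4.5/J48 analogue).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)
    vote = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("tree", DecisionTreeClassifier(random_state=0))],
        voting="soft")
    print(cross_val_score(vote, X, y, cv=5).mean())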
Authors - Cansu Cigdem EKIN, Mehmet Afsin YUCE, Emrah EKMEN, Gokay GOK, Ibrahim UGUR Abstract - This study presents a preliminary assessment of the reliability and validity of a UTAUT-based (Unified Theory of Acceptance and Use of Technology) technology acceptance model for KeyDESK, a health facility management system used in healthcare settings. The model evaluates key constructs of the UTAUT model to better understand the contextual adoption of health facility management systems. Data were collected from 2547 respondents comprising system operators and healthcare professionals who utilize the KeyDESK platform for task and service management. Reliability was assessed through internal consistency measures, which confirmed strong alignment across constructs. Convergent validity was established by evaluating shared variance and item relevance, while the distinctiveness of constructs was verified through cross-comparative analyses. Preliminary results suggest that all constructs fulfill reliability and validity criteria, ensuring the robustness of the measurement model. These results provide an empirical foundation for understanding user acceptance of health facility management systems and highlight areas for further model refinement. This study serves as a critical step towards conducting more comprehensive structural equation modeling (SEM) analyses in subsequent research.
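Cronbach's alpha, the internal-consistency measure relied on in such assessments, can be computed directly; this NumPy sketch uses synthetic item responses, not the KeyDESK survey data:

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    import numpy as np

    def cronbach_alpha(items):                   # rows = respondents, cols = items
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(100, 1))                         # shared construct
    responses = latent + rng.normal(scale=0.5, size=(100, 4))  # 4 correlated items
    print(cronbach_alpha(responses))             # high alpha for consistent items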
Authors - Kwang Sik Chung, Jihun Kang Abstract - As distance education services develop, much research is being conducted to analyze learners' learning activities and provide a customized learning environment optimized for each individual learner. The personalized learning environment is basically determined based on learner-centered learning analytics. However, learning analysis research on learning content, which is the subject of interaction with learners, is insufficient. In order to recommend learning content to learners and provide the most appropriate learning evaluation method, the learner's learning capability and the difficulty of the learning content must be appropriately analyzed. In this research, the learning difficulty of the learning contents relative to each learner is analyzed. For this purpose, educational (learning) contents data, learning operational data, learner personal learning data, peer learner group data, and learner statistical data are collected, stored at a learning records storage server, and analyzed by the Learning Analytics System with several deep learning models. Finally, we find the absolute difficulty of the subject, the relative difficulty of the subject, the relative difficulty of the peer learner group, the relative learning capability of a learner, the absolute learning capability of a learner, the learning contents difficulty level relative to each learner, and the absolute difficulty of the subject for each individual learner, and with these, personalized learning contents are created and decided.
Authors - Omar Hamid, Homaiza Saud Ahmad, Ahmed Albayah, Fatima Dakalbab, Manar Abu Talib, Qassim Nasir Abstract - The science of photogrammetry has been developing rapidly in recent years. With the rise of tools adopting this science and the advancement of computer vision technologies, the potential of such software is being acknowledged by researchers and integrated by market professionals into various fields. To cope with the rapid changes and expanding range of photogrammetry tools, a methodology was developed to identify the most widely adopted software tools, whether open-source or commercial, by the research community and market professionals. This resulted in the identification of 37 tools for which we developed a comprehensive review and presented our findings through visualizations such as pie charts and graphs. Furthermore, a comparison between the tools was carried out based on seven different attributes describing them, in order to assist professionals and individuals in picking software for specific use cases.
Authors - Mona Kherees, Karen Renaud, Dania Aljeaid Abstract - Smart Tourism is the most rapidly expanding economic sector, with data serving as the foundation of all Smart Tourism operations when travelers participate in various tailored travel services before, during, and after their journeys. The massive volume of data collected through various Smart Tourism Technologies raises tourists’ concerns. They might adopt privacy-preserving behaviors, like restricting sharing, fabricating data, or refusing to disclose requested information. Consequently, service providers manipulate users into disclosing personal data by employing persuasive marketing techniques based on Cialdini’s principles. This research aimed to investigate how the persuasion strategies of Cialdini employed by tourism organizations or service providers influence privacy concerns and users’ willingness to share personal information. A mixed-methods approach, incorporating expert reviews, was utilized to propose and validate a framework based on the Antecedents-Privacy Concerns-Outcome (APCO) model.
Authors - Otshepeng Lethabo Malebye, Tevin Moodley Abstract - This paper explores integrating Knowledge Management (KM) principles within Project Management frameworks to address critical challenges project teams face, such as diminishing individual experience and difficulty in employees applying their knowledge to the projects in which they take part. This paper identifies common problems encountered in knowledge sharing, such as tacit knowledge externalisation and documentation within project environments, by exploring the KM principles and their relevance to project success. A proposed solution is presented by looking at existing systems, such as DocuWare, and at Knowledge Management and Project Management frameworks. This paper introduces a framework to demonstrate the significance of employing systematic processes for identifying, capturing, sharing, and applying knowledge within project teams. It utilises techniques such as interviews, post-project reviews, communities of practice, and training. By using the integrated approach, the proposed solution aims to break down knowledge silos, facilitate tacit knowledge externalisation, and improve knowledge documentation.
Authors - Ahamed Nishath S, Murugeswari R Abstract - Researchers in the field of artificial intelligence are increasingly interested in exploring how to spot and counteract the spread of fake news. Compared to machine learning approaches, deep learning methods are superior in their ability to reliably identify instances of false news. This study analyses the efficacy of various neural network topologies in classifying news items into two distinct categories: false and real. This work introduces a hybrid model that merges CNN and RNN layers and incorporates a multi-channel mechanism, making it the most complex model evaluated. When determining each model's overall performance, criteria such as accuracy, precision, and recall rates are taken into consideration. According to the findings, the hybrid model efficiently attains a high degree of accuracy, reaching 99.16%. These results highlight the adaptability of various neural network designs in distinguishing between real and false news, revealing key insights that have the potential to be implemented in practical scenarios involving the verification of information and the evaluation of its validity.
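A hedged Keras sketch of such a multi-channel CNN+RNN hybrid; the vocabulary size, kernel sizes, and layer widths are illustrative, not the paper's exact architecture:

    # Parallel CNN channels (kernel sizes 3/4/5) plus an LSTM channel over a
    # shared embedding, concatenated for binary real/fake classification.
    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import (Concatenate, Conv1D, Dense, Embedding,
                                         GlobalMaxPooling1D, LSTM)

    inp = Input(shape=(200,))                    # token ids, max length 200
    emb = Embedding(20000, 64)(inp)

    channels = []
    for k in (3, 4, 5):                          # multi-channel CNN branches
        c = Conv1D(64, k, activation="relu")(emb)
        channels.append(GlobalMaxPooling1D()(c))
    channels.append(LSTM(64)(emb))               # RNN branch

    out = Dense(1, activation="sigmoid")(Concatenate()(channels))
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()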
Authors - Massimo Carlini, Giuseppina Anatriello, Elisabetta Cicchiello Abstract - The modern business context and the amount of data available to companies and organizations have made decision-making processes even more complex and articulated. This pushes companies to provide a better product or service for customers, reasoning in terms of quality, flexibility and responsiveness to their requests and needs. This context gives rise to the concepts of customer centricity and satisfaction: the need for companies to try to satisfy demand by offering efficient, quality treatment aimed at meeting customer needs, based on a deep and solid knowledge of them. This paper reports on the activities carried out over the last few years by the Customer Service of Anas S.p.A. to improve the Digital Customer Experience, making available to customers the knowledge and experience acquired over the years. The objective, in terms of Customer Centricity, was to put the customer at the center of the offer, providing them with more modern, innovative, intelligent and efficient dialogue tools.
Authors - Aniko Vagner Abstract - NoSQL databases are grouped into many categories, one of which is key-value databases. Our goal is to examine whether a system-independent key-value logical model exists. The idea came from the Redis database, which has the opaque key-value type named string but also supports lists, hashes, sets, sorted sets, etc. If we compare them to the document databases storing JSON documents, they could have a system-independent logical model. We gathered databases said to fall into the key-value category and read their documentation with respect to the stored data structures. We found many subcategories under the key-value category. We found that the clean key-value databases with buckets can have a simple system-independent database model, in which the buckets collect the key-value pairs. We could not identify a system-independent logical model for the remaining subcategories. Additionally, we recognised some viewpoints from which the data model of key-value databases can be examined. Altogether, considering all subcategories, we cannot speak about a system-independent logical data model for key-value databases.
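The simple bucketed model the abstract arrives at can be illustrated in a few lines; this is only a toy rendering of the idea, not any particular system's API:

    # System-independent view: a database is a set of buckets, and each bucket
    # collects opaque key-value pairs.
    db: dict[str, dict[str, bytes]] = {}         # bucket name -> {key: value}

    db.setdefault("sessions", {})["user:42"] = b"token-abc"
    db.setdefault("sessions", {})["user:43"] = b"token-def"
    print(db["sessions"]["user:42"])             # b'token-abc'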
Authors - Timi Heino, Sampsa Rauti, Sammani Rajapaksha, Panu Puhtila Abstract - Today, web analytics services are widely used on modern websites. While their main selling point is improving the user experience and return on investment, in practice they also increase the profits of third-party service providers through access to the harvested data. In this paper, we present the current state-of-the-art research on the use of web analytics tools and the kinds of privacy threats these applications pose for website users. Our study was conducted as a literature review, in which we focused on papers that described third-party analytics in detail and discussed their relation to user privacy and the privacy challenges they pose. We focused specifically on papers dealing with practical third-party analytics tools, such as Google Analytics or CrazyEgg. We review the application areas, purposes of use, and data items collected by web analytics tools, as well as privacy risks mentioned in the literature. Our results show that web analytics tools are used in ways that severely compromise user privacy in many areas. Practices such as collecting a wide variety of unnecessary data items, storing data for extended periods without good reason, and not informing users appropriately are common. In this study, we also give some recommendations to alleviate the situation.
Authors - Abigail Gonzalez-Arriagada, Ruben Lopez-Leiva, Connie Cofre-Morales, Eduardo Puraivan Abstract - The rapid advancement of information and communication technologies (ICT) has created a significant digital divide between older adults and younger generations. This divide affects the autonomy of older adults in a digitalized world. To address this issue, various initiatives have attempted to promote their digital skills, which requires reliable tools to measure them. However, assessing these competencies in this age group presents complex challenges, such as developing scales that accurately reflect the dimensions involved. In this study, we present empirical evidence on the reliability and adaptation of the Assessment of Computer-Related Skills (ACRS) scale. We translated the instrument into Spanish and added descriptors to optimize its application. The evaluation included 54 older adults in Chile (39 women and 15 men, aged 55 to 80) in an environment designed for individualized observation during the performance of specific digital tasks. The analyses revealed that the five dimensions of the instrument have high reliability, with Cronbach’s alpha values between 0.959 and 0.968. Six items were identified whose removal could slightly improve this indicator. Overall, the scale shows excellent internal consistency, with a G6 coefficient of 0.9994. These results confirm that, both at the level of each dimension and as a whole, the instrument demonstrates strong internal consistency, reinforcing its utility for assessing the intended competencies. An additional contribution of this work is the public availability of the data obtained, with the aim of encouraging future research in this area. Given the nature of the scale, which allows for the assessment of skills across various computer-related tasks, evidence of its high internal reliability constitutes a valuable resource for designing more inclusive educational programs specifically tailored to the needs of older adults in digital environments.
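The item-removal check mentioned (whether dropping an item raises the reliability coefficient) can be sketched with a standard Cronbach's alpha formula; the synthetic responses below are not the ACRS data:

    # Alpha-if-item-deleted: recompute alpha with each item left out in turn.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                              / items.sum(axis=1).var(ddof=1))

    rng = np.random.default_rng(1)
    resp = rng.normal(size=(54, 1)) + rng.normal(scale=0.6, size=(54, 8))
    base = cronbach_alpha(resp)
    for i in range(resp.shape[1]):
        a = cronbach_alpha(np.delete(resp, i, axis=1))
        if a > base:                             # dropping item i would improve alpha
            print(f"item {i}: alpha {base:.3f} -> {a:.3f}")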
Authors - Svetlin Stefanov, Malinka Ivanova Abstract - The advent of new technologies leads to a complexity of the cybercrime landscape and scenes, which requires an adequate response from digital forensic investigators. To support their forensic activities, a number of models and methodologies have been developed, such as the Digital Forensics Investigation from Practical Point of View (DFIP) methodology, proposed by us in a previous work. In addition, there is an urgent need for a virtual environment that would organize and manage the activities of investigators related to communication, document exchange, preparation of computer expertise, teamwork, information delivery and training. In this context, a software system implementing the DFIP methodology has been developed, and the aim of the paper is to present the results of a study regarding the opinion and attitudes of forensic experts on the usefulness and role of the software system during the different phases of digital forensic investigation.
Authors - Timi Heino, Robin Carlsson, Panu Puhtila, Sammani Rajapaksha, Henna Lohi, Sampsa Rauti Abstract - Electronics is one of the most popular product categories among consumers online. In this paper, we conduct a study on the third-party data leaks occurring on the websites of the online electronics stores most used by Finnish residents, as well as the number of third parties present on these websites. We studied the leaks by recording and analyzing the network traffic from each website while conducting the actions a normal user takes when purchasing a product. We also analyze dark patterns found in these websites' cookie consent banners. Our results show that in 80% of the cases, the product name, product ID and price were leaked to third parties along with data identifying the user. Almost all of the inspected websites used dark patterns in their cookie consent banners, and privacy policies often had severe deficiencies in informing the user of the extent of data collection.
Authors - Luis E. Quito-Calle, Maria E. Barros-Ponton, Dalila M. Gonzalez-Gonzalez, Luis F. Guerrero-Vasquez, Jessica V. Quito-Calle Abstract - The confinement of families, whether due to health emergencies or other quarantines, has brought lifestyle changes that alter the behavior of the population and cause stress among family members. The present study aimed to determine whether there is an association between lifestyles and parents' coping with stress during confinement due to the COVID-19 health emergency or quarantine. The methodology was quantitative, descriptive, correlational and cross-sectional. The participants were 75 representatives of the Bilingual Educational Institute "Home and School" INEBHYE. The instruments used were the Lifestyle Profile Questionnaire (PEPS-I, in Spanish) and the Stress Coping Questionnaire (CAE, in Spanish). The results show that a healthy lifestyle predominates, with families facing their stress through problem solving, positive reappraisal and religion during confinement. In conclusion, there is a statistically significant association between the stress-coping subscales and families' lifestyle, which would imply a change in lifestyle to cope with the stress caused by confinement due to COVID-19.
Authors - Vicente A. Pitogo, Cristopher C. Abalorio, Rolyn C. Daguil, Ryan O. Cuarez, Sandra T. Solis, Rex G. Parro Abstract - The agricultural resources in the Philippines are essential for national food security and economic development, with coffee at their center. Recent data released by the Philippine Statistics Authority (PSA) show an increase in coffee production overall, but a worrying decline in production in the Caraga region, which has over two thousand five hundred growers and a large area of land planted to coffee. The FarmVista project addressed this challenge through a data-driven approach by applying Principal Component Analysis (PCA) and various machine learning algorithms to classify and analyze coffee yield in Caraga. The study utilized a comprehensive dataset, the Coffee Farmers Enumerated Data, encompassing socio-demographic details, farming practices, and other influential factors. Gradient Boosting achieved the highest accuracy of 98.69%, with Random Forest closely following at 95.63%. These results highlight the effectiveness of advanced analytics and machine learning in improving coffee yield classification. By uncovering key patterns and factors affecting yield quality, this study provides valuable insights to optimize the coffee value chain in Caraga and addresses the region's production challenges.
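The PCA-plus-classifier pipeline described can be sketched in scikit-learn; random features stand in for the Coffee Farmers Enumerated Data, and the hyperparameters are illustrative:

    # Standardize, project onto principal components, then classify yield.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))               # socio-demographic features (stand-in)
    y = (X[:, :3].sum(axis=1) > 0).astype(int)   # yield class (high/low, synthetic)

    pipe = make_pipeline(StandardScaler(),
                         PCA(n_components=5),    # keep 5 principal components
                         GradientBoostingClassifier(random_state=0))
    print(cross_val_score(pipe, X, y, cv=5).mean())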
Authors - Vishnu Kumar Abstract - Cold chain logistics is the process of maintaining a controlled temperature throughout the storage and transportation of temperature-sensitive products. Ensuring the integrity of the cold chain is critical for the safety and efficacy of pharmaceutical (pharma) products. In the modern supply chain landscape, the pharma industry involves many stakeholders, including Small and Medium-sized Enterprises (SMEs), which handle logistics, storage and retail operations. Despite the availability of advanced temperature monitoring technologies, SMEs face significant challenges in adopting these solutions due to economic constraints, limited technological resources, and lack of expertise. To bridge this gap, this work proposes a novel, cost-effective Internet of Things (IoT) based framework for real-time temperature monitoring in the cold chain of pharma products. Using a Raspberry Pi and Sense HAT module, coupled with a smartphone application, this system enables SMEs to implement an affordable and reliable cold chain monitoring solution. The capabilities of the proposed framework are demonstrated through a temperature monitoring case study, simulating the conditions faced in pharma supply chains. This work is expected to provide a practical resource for SMEs and suppliers seeking to improve their cold chain management without incurring excessive costs.
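A minimal sketch of the Sense HAT reading loop such a framework might rest on; the 2-8 deg C band is the usual pharma guideline, and the on-device message stands in for the smartphone alert the paper describes:

    # Periodic temperature sampling with a threshold alert (runs on a Raspberry
    # Pi fitted with a Sense HAT).
    import time
    from sense_hat import SenseHat

    LOW, HIGH = 2.0, 8.0                         # assumed cold chain range, deg C
    sense = SenseHat()

    while True:
        t = sense.get_temperature()
        print(f"{time.strftime('%H:%M:%S')} {t:.1f} C")
        if not LOW <= t <= HIGH:
            sense.show_message("ALERT")          # stand-in for a push notification
        time.sleep(60)                           # sample once per minute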
Authors - Simona Filipova-Petrakieva, Petar Matov, Milena Lazarova, Ina Taralova, Jean Jacques Loiseau Abstract - Plant disease detection plays a key role in modern agriculture, with significant implications for yield management and crop quality. This paper is a continuation of previous research by the authors' team related to the detection of pathologies on apple tree leaves. In order to eliminate the problem of overfitting in traditional convolutional neural networks (CNNs), transfer learning layers are added to a residual neural network architecture, ResNet50. The suggested model is based on a pre-trained CNN whose weight coefficients are adapted until ResNet obtains the final classification. The model implementation uses the TensorFlow and Keras frameworks and is developed in the Jupyter Notebook environment. In addition, ImageDataGenerator is utilized for data augmentation and preprocessing to increase the classification accuracy of the proposed model. The model is trained using a dataset of 1821 high-resolution apple leaf images divided into four distinct classes: healthy, multiple diseases, rust, and scab. The experimental results demonstrate the effectiveness of the suggested ResNet architecture, which outperforms other state-of-the-art deep learning architectures in eliminating the overfitting problem. Identifying different apple leaf pathologies with the proposed model contributes to developing smart agricultural practices.
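A hedged sketch of the described setup: a frozen ResNet50 base, a new four-class head, and ImageDataGenerator augmentation; the directory layout and training settings are assumptions:

    # Transfer learning on ResNet50 for the four leaf classes.
    from tensorflow.keras.applications import ResNet50
    from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
    from tensorflow.keras.models import Model
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                       # freeze pre-trained weights

    x = GlobalAveragePooling2D()(base.output)
    out = Dense(4, activation="softmax")(x)      # healthy/multiple/rust/scab
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    gen = ImageDataGenerator(rescale=1 / 255., rotation_range=20,
                             horizontal_flip=True, validation_split=0.2)
    train = gen.flow_from_directory("leaves/", target_size=(224, 224),
                                    subset="training")   # assumed folder layout
    model.fit(train, epochs=10)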
Authors - Malinka Ivanova, Svetlin Stefanov Abstract - The growing number and increasing complexity of cyberattacks require investigative experts to use contemporary technologies for finding and analyzing digital evidence and for preparing computer expertise. Artificial intelligence (AI) and machine learning (ML) are among the possibilities for automating a number of routine activities in digital forensics, which can be performed significantly faster and more efficiently. The aim of the paper is to present the potential of AI and ML in analyzing digital evidence; here, the extraction of text and image information from PDF files is specifically examined. A classification of different types of files that could potentially be located on a victim's or attacker's smartphone or computer is also performed using the ML algorithm Decision Tree. Synthetically generated files and original scientific papers are utilized for the experiments. The findings point out that the accuracy obtained in classifying file formats and in analyzing and summarizing the content of PDF files is high, which is achieved through applying Natural Language Processing techniques and Large Language Models.
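As one concrete step of such evidence analysis, PDF text extraction can be sketched as below; pypdf is used here as a common choice, not necessarily the authors' toolchain, and the file name is hypothetical:

    # Pull the text layer out of a PDF so it can be classified or summarized.
    from pypdf import PdfReader

    reader = PdfReader("evidence.pdf")           # hypothetical file under analysis
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    print(text[:500])                            # input for NLP summarization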
Authors - Enaam Youssef, Mahra Al Malek, Nagwa Babiker Yousif, Soumaya Abdellatif Abstract - Social media algorithms are important in suggesting content aligned with users' needs. The relevant technology suggests content and ensures its suitability and relevance to users. Consequently, it is considered an important aspect of everyday life in enhancing community and cultural identity among youth. This research examines the effect of social media algorithms on the community and cultural identity of the young generation in the United Arab Emirates. Theoretically supported by Social Identity Theory, this research gathered data from 341 respondents using structured questionnaires. Results indicated that Social Media Algorithms positively affect Community Identity, implying that these platforms promote a sense of belonging by connecting users to local groups, discussions, and events, strengthening their cultural and social community ties. Results also revealed that the effects of social media algorithms on cultural identity remain positively significant. These findings indicate that social media content improves connection to cultural heritage and shapes cultural identity perceptions, although algorithms sometimes prioritize global over local practices. Overall, these results indicate a robust influence of social media in the UAE as a factor enabling the young generation to seek community identity and cultural belonging, which further helps them retain their overall social identity in the best possible manner. Study findings and limitations are discussed accordingly.
Authors - Hayat Bihri, Soukaina Sraidi, Haggouni Jamal, Salma Azzouzi, My El Hassan Charaf Abstract - Predictive analytics and artificial intelligence (AI) offer significant potential to improve healthcare, yet challenges in achieving interoperability across diverse settings, such as long-term care and public health, remain. Enhancing Electronic Health Records (EHRs) with multimodal data provides a more comprehensive view of patient health, leading to better decision-making and patient outcomes. This study proposes a novel framework for real-time cardiovascular disease (CVD) risk prediction and monitoring by integrating medical imaging, clinical variables, and patient narratives from social media. Unlike traditional models that rely solely on structured clinical data, this approach incorporates unstructured insights, improving prediction accuracy and enabling continuous monitoring. The methodology includes modality-specific preprocessing: sentiment analysis and Named Entity Recognition (NER) for patient narratives, Convolutional Neural Networks (CNNs) for imaging, and Min-Max scaling with k-Nearest Neighbors (k-NN) imputation for clinical variables. A unique patient identifier ensures precise data fusion through multimodal transformers, with attention mechanisms prioritizing key features. Real-time monitoring leverages streaming natural language processing (NLP) to detect health trends from social media, triggering alerts for healthcare providers. The model undergoes rigorous validation using metrics like AUC-ROC, AUC-PR, Brier score, SHAP values, expert reviews, and clinical indicators, ensuring robustness and relevance.
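The clinical-variable preprocessing named in the abstract, k-NN imputation followed by Min-Max scaling, can be sketched with scikit-learn; the values and column meanings are invented:

    # Impute missing clinical values from the 2 nearest rows, then scale to [0, 1].
    import numpy as np
    from sklearn.impute import KNNImputer
    from sklearn.preprocessing import MinMaxScaler

    clinical = np.array([[120, 80, np.nan],
                         [140, np.nan, 220],
                         [130, 85, 200],
                         [110, 75, 180.]])       # e.g., BP, pulse, cholesterol (assumed)

    imputed = KNNImputer(n_neighbors=2).fit_transform(clinical)
    scaled = MinMaxScaler().fit_transform(imputed)
    print(scaled)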
Authors - Quoc Hung NGUYEN, Xuan Dao NGUYEN THI, Thanh Trung LE, Lam NGUYEN THI Abstract - With the rapid development of financial technology, financial product recommendation systems play an increasingly important role in enhancing user experience and reducing information search costs, becoming a key factor in the financial services industry. Amid growing competitive pressure, the diversification of user needs, and the continuous expansion of financial products, traditional recommendation systems reveal limitations, especially in terms of accuracy and personalization. Therefore, this study focuses on applying deep learning technology to develop a smarter and more efficient financial product recommendation system. We evaluate this model based on key metrics such as precision, recall, and F1-score to ensure a comprehensive assessment of the proposed approach's effectiveness. Methodologically, we employ the Long Short-Term Memory (LSTM) model, a type of Recurrent Neural Network (RNN) designed to address the challenge of long-term memory retention in time-series data. For the task of recommending the next loan product for customers, LSTM demonstrates its ability to remember crucial information from the distant past, thanks to its gate structure, including input, forget, and output gates. Additionally, the model leverages a robust self-attention mechanism to analyze complex relationships between user behavior and financial product information.
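A hedged sketch of next-product recommendation with an embedding-plus-LSTM stack, as the abstract describes; the catalogue size and random interaction histories are invented, and the self-attention component is omitted for brevity:

    # Predict the next financial product from a customer's product sequence.
    import numpy as np
    from tensorflow.keras import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    n_products = 50                              # assumed catalogue size
    seqs = np.random.randint(1, n_products, size=(1000, 8))  # synthetic histories
    X, y = seqs[:, :-1], seqs[:, -1]             # predict each sequence's last item

    model = Sequential([Embedding(n_products, 16),
                        LSTM(32),
                        Dense(n_products, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(X, y, epochs=3, verbose=0)
    print(model.predict(X[:1], verbose=0).argmax())  # top recommended product id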