The 10th International Congress on Information and Communication Technology (ICICT 2025), held concurrently with the ICT Excellence Awards, will take place in London, United Kingdom | February 18–21, 2025.
Authors - Sulafa Badi, Salam Khoury, Kholoud Yasin, Khalid Al Marri Abstract - This study investigates consumer attitudes toward Mobility as a Service (MaaS) in the context of the UAE's diverse population, focusing on the factors influencing adoption intentions. A survey of 744 participants was conducted to assess public perceptions, employing hierarchical and non-hierarchical clustering methods to identify distinct consumer segments. The analysis reveals five clusters characterised by varying demographics, travel lifestyles, and attitudes towards MaaS, highlighting the influence of UTAUT2 variables, including performance expectancy, social influence, hedonic motivation, price value, and perceived risk. Among the clusters, ‘Enthusiastic Adopters’ and ‘Convenience-Driven Adopters’ emerge as key segments with a strong reliance on public transport and a willingness to adopt innovative transportation solutions. The findings indicate a shared recognition of the potential benefits of MaaS despite differing opinions on its implementation. This research contributes to the theoretical understanding of MaaS adoption by offering an analytical typology relevant to a developing economy while also providing practical insights for policymakers and transport providers. By tailoring services to meet the unique needs of various consumer segments, stakeholders can enhance the integration of MaaS technologies into the UAE's transportation system. Future research should explore the dynamic nature of public sentiment regarding MaaS to inform ongoing development and implementation efforts.
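For readers who want a concrete picture of the segmentation workflow described above, a minimal Python sketch follows; the survey file name, the UTAUT2 column names, and the choice of Ward linkage plus k-means with five clusters are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of hierarchical + non-hierarchical clustering of survey responses.
# File name and UTAUT2 column names are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering, KMeans

df = pd.read_csv("maas_survey.csv")          # hypothetical survey export
items = ["performance_expectancy", "social_influence",
         "hedonic_motivation", "price_value", "perceived_risk"]
X = StandardScaler().fit_transform(df[items])

# Hierarchical (Ward) clustering, useful for inspecting a plausible cluster count.
hier = AgglomerativeClustering(n_clusters=5, linkage="ward").fit(X)

# Non-hierarchical (k-means) clustering with the same k for the final segments.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
df["segment"] = kmeans.labels_

# Profile each segment by its mean construct scores.
print(df.groupby("segment")[items].mean().round(2))
```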
Authors - Rakhi Bharadwaj, Priyanshi Patle, Bhagyesh Pawar, Nikita Pawar, Kunal Pehere Abstract - The detection of forged signatures is a critical challenge in various fields, including banking, legal documentation, and identity verification. Traditional methods for signature verification rely on handcrafted features and machine learning models, which often struggle to generalize across varying handwriting styles and sophisticated forgeries. In recent years, deep learning techniques have emerged as powerful tools for tackling this problem, leveraging large datasets and automated feature extraction to enhance accuracy. In this literature survey paper, we have studied and analyzed various research papers on fake signature detection, focusing on the accuracy of different deep learning techniques. The primary models reviewed include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs). We evaluated the performance of these methods based on their reported accuracy on benchmark datasets, highlighting the strengths and limitations of each approach. Additionally, we discussed challenges such as dataset scarcity and the difficulty of generalizing models to detect different types of forgeries. Our analysis provides insights into the effectiveness of these methods and suggests potential directions for future research in improving signature verification systems.
Authors - Shilpa Bhairanatti, Rubini P Abstract - While the rollout of 5G cellular networks will extend into the next decade, there is already significant interest in the technologies that will form the foundation of its successor, 6G. Although 5G is expected to revolutionize our lives and communication methods, it falls short of fully supporting the Internet-of-Everything (IoE). The IoE envisions a scenario where over a million devices per cubic kilometer, both on the ground and in the air, demand ubiquitous, reliable, and low-latency connectivity. 6G and future technologies aim to provide ubiquitous wireless connectivity for the entire communication system, accommodating the rapidly increasing number of intelligent devices and growing communication demand. These objectives can be achieved by incorporating THz-band communication and wider spectrum resources with minimized communication error. However, this communication technology faces several challenges, such as energy efficiency, resource allocation, and latency, which need to be addressed to improve overall communication performance. To overcome these issues, we present a roadmap for Point-to-Point (P2P) and Point-to-Multipoint (P2MP) communication in which a channel coding mechanism is introduced, taking the Turbo channel coding scheme as the base approach. Furthermore, deep learning-based training is applied to improve the error-correcting performance of the system. The performance of the proposed model is measured in terms of BER for varied SNR levels and additive white noise channel scenarios, and experimental analysis shows that the proposed coding approach outperforms existing error-correcting schemes.
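The BER-versus-SNR evaluation mentioned above can be illustrated with a small NumPy sketch; it simulates an uncoded BPSK baseline over an additive white Gaussian noise channel and is only a stand-in for the proposed Turbo plus deep-learning coding scheme.

```python
# Minimal BER-vs-SNR simulation over an AWGN channel (uncoded BPSK baseline).
import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000

for snr_db in range(0, 11, 2):
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                     # BPSK mapping: 0 -> -1, 1 -> +1
    snr = 10 ** (snr_db / 10)
    noise_std = np.sqrt(1 / (2 * snr))         # noise standard deviation from Es/N0
    received = symbols + noise_std * rng.normal(size=n_bits)
    decoded = (received > 0).astype(int)       # hard-decision detection
    ber = np.mean(decoded != bits)
    print(f"SNR = {snr_db:2d} dB  BER = {ber:.5f}")
```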
Authors - Hiep. L. Thi Abstract - A brief summary of the paper, highlighting key points such as the increasing role of UAVs in various sectors, the challenges related to data storage on UAVs, and proposed solutions for improving both the efficiency and security of data management. Include a note on the scope of the study, methodologies, and key findings.
Authors - Gareth Gericke, Rangith B. Kuriakose, Herman J. Vermaak Abstract - Communication architectures are demonstrating their significance in the development landscape of the Fourth Industrial Revolution. Nonetheless, the progress of architectural development lags behind that of the Fourth Industrial Revolution itself, resulting in subpar implementations and research gaps. This paper examines the prerequisites of Smart Manufacturing and proposes the utilization of a novel communication architecture to delineate a pivotal element, information appropriateness, showcasing its efficient application in this domain. Information appropriateness leverages pertinent information within the communication flow at machine level, facilitating real-time monitoring, decision-making, and control over production metrics. The metrics scrutinized herein include production efficiency, bottleneck mitigation, and network intelligence, while accommodating architectural scalability. These metrics are communicated and computed at machine level to assess the efficacy of a communication architecture at this level, while also investigating its synergistic relationship across other manufacturing tiers. Results of this ongoing study offer insights into data computation and management at the machine level and demonstrate an effective approach for handling pertinent information at critical junctures. Furthermore, the adoption of a communication architecture helps minimize information redundancy and overhead in both transmission and storage for machine-level communication.
Authors - Y. Abdelghafur, Y. Kaddoura, S. Shapsough, I. Zualkernan, E. Kochmar Abstract - Early reading comprehension is crucial for academic success, involving skills like making inferences and critical analysis, and the Early Grade Reading Assessment (EGRA) toolkit is a global standard for assessing these abilities. However, creating stories that meet EGRA's standards is time-consuming and labour-intensive and requires expertise to ensure readability, narrative coherence, and educational value. In addition, creating these stories in Arabic is challenging due to the limited availability of high-quality resources and the language's complex morphology, syntax, and diglossia. This research examines the use of large language models (LLMs), such as GPT-4 and Jais, to automate Arabic story generation, ensuring readability, narrative coherence, and cultural relevance. Evaluations using Arabic readability formulas (OSMAN and SAMER) show that LLMs, particularly Jais and GPT, can effectively produce high-quality, age-appropriate stories, offering a scalable solution to support educators and enhance the availability of Arabic reading materials for comprehension assessment.
Authors - Robert Johnson, Jing Jung Zhang, Fu Kuo Manchu, Silvio Simani Abstract - With a focus on fixing the common problems of imbalance and misalignment, this study introduces an artificial intelligence tool based on a state-of-the-art deep learning method that enhances automatic condition monitoring and fault detection for mechanical processes. The main breakthrough is a trustworthy condition monitoring model using artificial neural networks that extract feature vectors from signal data through frequency analysis. A high fault detection accuracy rate highlights the research accomplishment, proving its ability to provide new solutions for predictive maintenance as well. This research considers the different working conditions of a mechanical process by analysing four separate operational classes, including balanced operation, horizontal and vertical misalignments, unbalanced conditions, and regular operation. The dataset studied in this work includes a wealth of information and was carefully calibrated for neural network training, and it also has the potential to be employed in the development of maintenance procedures for mechanical plants. Finally, this study provides a significant step towards the goals of improved performance and stringent safety requirements that industries are aiming for.
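A minimal sketch of frequency-analysis feature extraction of the kind described above is shown below; the synthetic vibration signal, sampling rate, and band edges are assumptions for illustration rather than the authors' setup.

```python
# Sketch of frequency-domain feature extraction for condition monitoring.
# The synthetic signal and band edges are illustrative placeholders.
import numpy as np

fs = 5_000                                     # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic vibration signal: shaft frequency + a fault harmonic + noise.
signal = (np.sin(2 * np.pi * 50 * t) + 0.4 * np.sin(2 * np.pi * 150 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Feature vector: energy in a few frequency bands, suitable as ANN input.
bands = [(0, 100), (100, 300), (300, 1000), (1000, 2500)]
features = [np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2) for lo, hi in bands]
print(np.round(features, 4))
```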
Authors - Nejjari Nada, Chafi Anas, Kammouri Alami Salaheddine Abstract - The handicraft sector holds crucial importance within the Moroccan economy, serving as a fundamental pillar that significantly contributes to the country's economic balance. This sector not only preserves cultural heritage but also provides employment opportunities and sustains local economies. Our study primarily revolves around an in-depth exploration of the artisanal universe, aiming to derive relevant recommendations to optimize its performance and enhance its competitive edge. By focusing on identifying gaps, challenges, and opportunities within the sector, our goal is to develop concrete improvement suggestions that can catalyze continuous development and growth. Through a comprehensive analysis, we seek to provide actionable insights that can improve efficiency, sustainability, and the overall impact of the handicraft sector on the Moroccan economy. This research aspires to support policymakers, stakeholders, and artisans themselves in fostering a thriving and resilient artisanal industry that contributes robustly to economic and social development.
Authors - Ubayd Bapoo, Clement N Nyirenda Abstract - This study evaluates the performance of Soft Actor Critic (SAC), Greedy Actor Critic (GAC), and Truncated Quantile Critics (TQC) in high-dimensional decision-making tasks using fully observable environments. The focus is on parametrized action (PA) spaces, eliminating the need for recurrent networks, with the Platform-v0 and Goal-v0 benchmarks testing discrete actions linked to continuous action-parameter spaces. Hyperparameter optimization was performed with Microsoft NNI, ensuring reproducibility by modifying the codebase for GAC and TQC. Results show that Parameterized Action Greedy Actor-Critic (PAGAC) outperformed the other algorithms, achieving the fastest training times and highest returns across benchmarks, completing 5,000 episodes in 41:24 for the Platform game and 24:04 for the Robot Soccer Goal game. Its speed and stability provide clear advantages in complex action spaces. Compared to PASAC and PATQC, PAGAC demonstrated superior efficiency and reliability, making it ideal for tasks requiring rapid convergence and robust performance. Future work could explore hybrid strategies combining entropy regularization with truncation-based methods to enhance stability and expand investigations into generalizability.
Authors - Amalia Mukhlas, Shahrinaz Ismail, Bazilah A. Talip, Jawahir Che Mustapha, Juliana Jaafar Abstract - The pharmaceutical industry's significant influence on other sectors underscores the urgency of implementing sustainable systems. Technology offers invaluable tools for achieving this goal. This study examines the challenges pharmaceutical websites face in supporting collaboration and governance, emphasizing the difficulties in providing accessible and relevant information. Using experiential observation, it highlights governance-related inefficiencies in website design. The proposed solutions focus on improving website usability for transparency, accountability, and social involvement, in line with sustainable systems practices. The findings disclose the suboptimal design of many pharmaceutical websites, which hinders collaboration with external parties, specifically academia, and potentially impacts the industry's sustainability efforts.
Authors - Sarah Abdalrahman Al-Shqaqi, Mohammed Zayed, Kamal Al-Sabahi, Adnan Al-Mutawkkil Abstract - The development of the Internet of Things (IoT) in recent years has significantly contributed to a paradigm change in all aspects of life. IoT has rapidly gained traction in a short period of time across a variety of sectors, including business, healthcare, governance, infrastructure management, consumer services, and even defense. IoT has the ability to monitor systems through the delivery of consistent and precise information. In medical services, it enables informed decision-making, and surrounding technologies will play an essential role in providing healthcare to people in remote locations. Health centers gather data from areas where cholera has emerged or is suspected, and the data is then sent to the Ministry of Public Health and Population. The World Health Organization in Yemen analyzes the data using two systems (eDEWS and EWARS), but these lack the full capability for early detection of cholera and do not utilize Internet of Things technology, which plays an important role in solving most health problems, including cholera. To address this issue, we propose a framework consisting of six layers, with added parameters that help in the early detection of cholera. In this study, the IoT framework was used for early detection of cholera, thereby assisting the Ministry of Public Health and Population and the World Health Organization in making informed decisions by adding an intelligent medical server layer. Overall, this framework is applicable to any field, particularly healthcare for cholera.
Authors - Praveen Kumar Sandanamudi, Neha Agrawal, Nikhil Tripathi, Pavan Kumar B N Abstract - Over the past few years, the proliferation of resource-constrained devices and the rise of the Internet of Things (IoT) in UAV applications have accentuated the need for lightweight cryptographic (LWC) algorithms. These algorithms are designed to be more suitable for UAV-application-based IoT devices as they are efficient in terms of memory usage, computation, power utilization, etc. Based on the literature study, the algorithms most suitable for UAV-application-based resource-constrained devices are identified in this paper. This list also includes the ASCON cipher, winner of NIST's lightweight cryptography standardization contest. Furthermore, these algorithms have been implemented on a UAV-application-based Raspberry Pi 3 Model B to analyze their hardware and software performance with respect to essential metrics, such as latency, power consumption, throughput, energy consumption, etc., for different payloads. From the experimental results, it has been observed that SPECK is optimized for software implementations and may offer better performance in certain scenarios, especially on UAV-application-based resource-constrained devices. ASCON, on the other hand, provides both encryption and authentication in a single pass, potentially reducing latency and overhead. This paper aims to assist researchers in pinpointing the most appropriate LWC algorithm tailored to specific scenarios and requirements.
Authors - Venkata Sai Varsha, Prodduturi Bhargavi, Samah Senbel Abstract - This paper provides a thorough analysis of U.S. Congressional history from the 66th to the 118th Congress, examining demographic trends, political shifts, and party dynamics across decades. Using Python-based data processing, the study compiles and interprets historical data to identify patterns in representative demographics, party representation, and legislative impact. The analysis investigates generational changes within Congress, with particular focus on age distribution, tenure, and shifts in political party dominance. Visualizations and statistical insights generated through Python libraries, such as Pandas, Matplotlib, and Seaborn, reveal significant historical events and socio-political influences shaping Congress. By examining age-related trends, the study highlights a generational gap, with older members retaining significant representation and a younger cohort gradually emerging. Additionally, it explores the evolution of bipartisan dominance and third-party representation, offering insights into political diversity and the resilience of the two-party system. This research contributes to the understanding of how demographic and political transformations within Congress reflect broader societal trends and may influence future governance.
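The abstract names Pandas and Matplotlib explicitly, so a short sketch of the kind of analysis described is given below; the CSV file name and column names (congress, party, age) are hypothetical.

```python
# Sketch of the Pandas/Matplotlib analysis described above.
# The file name and column names (congress, party, age) are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

members = pd.read_csv("congress_members.csv")

# Median member age per Congress to visualise the generational trend.
age_trend = members.groupby("congress")["age"].median()
age_trend.plot(marker="o", title="Median member age by Congress")
plt.xlabel("Congress number")
plt.ylabel("Median age (years)")
plt.tight_layout()
plt.show()

# Party share of seats per Congress to track two-party dominance.
counts = members.groupby(["congress", "party"]).size().unstack(fill_value=0)
party_share = counts.div(counts.sum(axis=1), axis=0)
print(party_share.round(3).head())
```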
Authors - Taufique Hedayet, Anup Sen, Mahfuza Akter Jarin, Shohel Rana Shaon, Joybordhan Sarkar, Sadah Anjum Shanto Abstract - A price hike is an atypical increase in the cost of essential, everyday goods, and such increases have several contributing factors; everyday items are becoming more and more expensive. In this research, we use Bidirectional Long Short-Term Memory (BLSTM), Long Short-Term Memory (LSTM), AdaBoost, Support Vector Regression (SVR), Gradient Boosted Regression Trees (GBRT), and a REST API for forecasting the prices of necessary commodities, and we evaluate efficiency against the value of gold. Our primary objective is to find a method that can detect and predict price hikes more accurately and efficiently than the other approaches currently available in the relevant literature. The acceptance of the detection and prediction methods is based on their accuracy and efficiency. Price hike predictions may play an important role in everyday life for many stakeholders, including firms, consumers, and the government. The dynamic and sporadic character of price estimation is highlighted as a major forecasting challenge.
Authors - Sathvik Putta, Tejagni Chichili, Samah Senbel Abstract - Traffic accidents remain a critical issue globally, with significant implications for public health, safety, and economic stability. This study provides a comprehensive analysis of traffic accident trends in the northeastern United States, focusing on Connecticut and its neighboring states—New York, New Jersey, New Hampshire, and Massachusetts. By leveraging a dataset encompassing fatal collisions, driver behaviors, and car insurance premiums, this work investigates correlations between risky driving habits, accident outcomes, and the associated financial impacts. Key metrics analyzed include speeding-related incidents, alcohol-impaired driving, distracted driving, and their influence on insurance costs and claims. A rigorous data preprocessing methodology was employed, including normalization, outlier detection, and feature selection, ensuring a robust and reliable dataset for analysis. The study used advanced visualization techniques and statistical modeling, utilizing Python libraries like Pandas, Matplotlib, and Scikit-learn, to identify trends and derive actionable insights. Comparative analysis reveals that while neighboring states such as Massachusetts and New York excel in certain safety metrics, Connecticut lags in addressing critical behavioral risks like speeding and alcohol impairment.
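A brief sketch of the preprocessing steps named above (normalization and outlier handling) follows; the file name, column names, and the IQR rule are illustrative assumptions.

```python
# Sketch of the preprocessing steps mentioned above: IQR-based outlier
# filtering and min-max normalization. File and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

cols = ["fatal_collisions", "pct_speeding", "pct_alcohol", "insurance_premium"]
df = pd.read_csv("ne_states_accidents.csv")

# Drop rows that fall outside 1.5 * IQR on any analysed column.
q1, q3 = df[cols].quantile(0.25), df[cols].quantile(0.75)
iqr = q3 - q1
mask = ~((df[cols] < q1 - 1.5 * iqr) | (df[cols] > q3 + 1.5 * iqr)).any(axis=1)
clean = df[mask].copy()

# Min-max normalization so all metrics share a 0-1 scale before modelling.
clean[cols] = MinMaxScaler().fit_transform(clean[cols])
print(clean.describe().round(3))
```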
Authors - Dawngliani M S, Thangkhanhau H, Lalhruaitluanga Abstract - Breast cancer continues to pose a major public health challenge world-wide, necessitating the development of accurate prediction algorithms to improve patient outcomes. This study aimed to devise a predictive model for breast cancer recurrence using machine learning techniques, with data sourced from the Mizoram State Cancer Institute. Utilizing the Weka machine learning toolkit, a hybrid approach incorporating classifiers such as K-Nearest Neighbors (KNN) and Random Forest was explored. Additionally, individual classifiers including J48, Naïve Bayes, Multilayer Perceptron, and SMO were employed to evaluate their predictive efficacy. Voting ensembles were utilized to augment performance accuracy. The hybridization of Random Forest and KNN classifiers, along with other base classifiers, demonstrated notable improvements in predictive performance across most classifiers. In particular, the combination of Random Forest with J48 yielded the highest performance accuracy at 82.807%. However, the J48 classifier alone achieved a superior accuracy rate of 84.2105%, signifying its efficacy in this context. Thus, drawing upon the analysis of the breast cancer dataset from the Mizoram State Cancer Institute, Aizawl, it was concluded that J48 exhibits the highest predictive accuracy compared to alternative classifiers.
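The study itself used Weka classifiers, but the voting-ensemble idea can be sketched with an analogous scikit-learn example; the dataset and the DecisionTreeClassifier stand-in for J48 are assumptions for illustration only.

```python
# Analogous scikit-learn sketch of the voting-ensemble idea described above
# (the study itself used Weka classifiers such as J48 and SMO).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the clinical dataset

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("dt", DecisionTreeClassifier(random_state=0))],  # J48 analogue
    voting="hard")

print("CV accuracy: %.3f" % cross_val_score(ensemble, X, y, cv=10).mean())
```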
Authors - Cansu Cigdem EKIN, Mehmet Afsin YUCE, Emrah EKMEN, Gokay GOK, Ibrahim UGUR Abstract - This study presents a preliminary assessment of the reliability and validity of a technology acceptance model UTAUT (Unified Theory of Acceptance and Use of Technology) for KeyDESK, a health facility management system used in healthcare settings. The model evaluates key constructs of the UTAUT model to better understand the contextual adoption of health facility management systems. Data were collected from 2547 respondents comprising system operators and healthcare professionals who utilize the KeyDESK platform for task and service management. Reliability was assessed through internal consistency measures, which confirmed strong alignment across constructs. Convergent validity was established by evaluating shared variance and item relevance, while the distinctiveness of constructs was verified through cross-comparative analyses. Preliminary results suggest that all constructs fulfill reliability and validity criteria, ensuring the robustness of the measurement model. These results provide an empirical foundation for understanding user acceptance of health facility management systems and highlight areas for further model refinement. This study serves as a critical step towards conducting more comprehensive structural equation modeling (SEM) analyses in subsequent research.
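The internal-consistency check described above can be illustrated with a short Cronbach's alpha computation; the item matrix below is synthetic and the four-item construct is hypothetical.

```python
# Minimal sketch of an internal-consistency check: Cronbach's alpha for one
# UTAUT construct, computed from its item scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses for a four-item construct.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(200, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```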
Authors - Kwang Sik Chung, Jihun Kang Abstract - As distance education services develop, much research is being conducted to analyze learners' learning activities and provide a customized learning environment optimized for each individual learner. The personalized learning environment is basically determined based on learner-centered learning analytics. However, learning analysis research on learning content, which is the subject of interaction with learners, is insufficient. In order to recommend learning content to learners and provide the most appropriate learning evaluation method, the learner's learning capability and the difficulty of the learning content must be appropriately analyzed. In this research, the learning difficulty of the learning content relative to the learner is analyzed, and through this, the learner-relative difficulty of the learning content is derived. For this purpose, Educational (Learning) Contents Data, Learning Operational Data, Learner Personal Learning Data, Peer Learner Group Data, and Learner Statistical Data are collected, stored at a learning records storage server, and analyzed by the Learning Analytics System with several deep learning models. Finally, we find the absolute difficulty of the subject, the relative difficulty of the subject, the relative difficulty of the peer learner group, the relative learning capability of a learner, the absolute learning capability of a learner, the learning content difficulty level relative to each learner, and the absolute difficulty of the subject for each individual learner, and personalized learning content is created and decided based on these results.
Authors - Libero Nigro, Franco Cicirelli Abstract - This paper proposes the Evolutionary Random Swap (ERS) clustering algorithm, which extends the basic behavior of Random Swap (RS) with a population of candidate solutions (centroid configurations), preliminarily established through a proper seeding procedure, which provides the swap data points that RS uses when attempting to improve the current clustering solution. A new centroid solution replaces the previous solution if it reduces the Sum of Squared Errors (SSE) index. For datasets that are not large, ERS can also be used to optimize (maximize) the Silhouette (SI) coefficient, which measures the degree of separation of clusters. High-quality clustering is mirrored by clusters with high internal cohesion and high external separation. The paper describes the design of ERS, which is currently implemented in parallel Java. Different clustering experiments concerning the application of ERS to both benchmark and real-world datasets are reported. Clustering results are compared, for accuracy and execution time performance, to the use of the basic RS algorithm. Clustering quality is also checked with the application of other known algorithms.
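A minimal sketch of the swap-and-accept step that RS and ERS build on is given below; it is not the authors' parallel Java implementation, and the full algorithms also refine each candidate (for example with k-means iterations) before comparing SSE.

```python
# Sketch of the random-swap acceptance step: replace one centroid with a
# randomly chosen data point and keep the new configuration only if it
# lowers the Sum of Squared Errors (SSE).
import numpy as np

def sse(X, centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                 # toy dataset
centroids = X[rng.choice(len(X), 5, replace=False)].copy()

for _ in range(100):
    candidate = centroids.copy()
    candidate[rng.integers(5)] = X[rng.integers(len(X))]   # swap one centroid
    if sse(X, candidate) < sse(X, centroids):              # accept improving swaps
        centroids = candidate

print(f"final SSE = {sse(X, centroids):.2f}")
```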
Authors - Amir Ince, Saurav Keshari Aryal, Howard Prioleau Abstract - With the rise of social media, vast amounts of text, including code-switching, are being generated, presenting unique linguistic challenges for sentiment analysis. This study explores how existing models perform without fine-tuning to understand the challenges of analyzing code-switched data. We propose a prompt tuning approach based on generated versus human-labeled code-switched datasets. Our results show that the Few-shot technique and the Prompt Optimization Framework with Dataset Examples offer the most consistent performance, highlighting the importance of real-world examples and language-specific data in improving multilingual sentiment analysis. However, the studied models and techniques do not exhibit the ability to significantly triage sentiments for Hindi and Dravidian languages.
Authors - Hector Rafael Morano Okuno Abstract - The use of large language models (LLMs) has spread to various areas of knowledge. However, it is necessary to continue exploring them to determine their scope. In this work, an LLM is investigated for generating G-code programs for machining parts on computer numerical control (CNC) milling machines. Prompt engineering is employed to communicate with the LLM, and a series of prompts is used to probe its scope. Among the results are the manufacturing operations that an LLM can program and the problems that arise in the generated G-code. Finally, a sequence of steps is proposed for creating G-code using LLMs, and the prompt structures are presented to help users understand how LLMs behave when generating G-code.
Authors - Hanaa Mohsin Ahmed, Muna Ghazi Abdulsahib Abstract - Fuzzy deep learning, which combines fuzzy logic and deep learning techniques to handle uncertainty and imprecision in the data as a first task and learn hierarchical representations of the data as a second task, is a promising method for feature data classification, with many useful and important applications spanning several disciplines of knowledge. This work uses a fuzzy logic deep learning model to classify feature data, specifically the transmission casing data. For the first time, fuzzy logic deep learning has been applied to the transmission casing data, a well-known benchmark dataset for classification tasks. The results of the experiments show that the proposed model outperforms the deep learning-based classification model, classifying the transmission casing data with a higher accuracy of 100% and greater robustness. We also discuss potential future research directions for transmission-based fuzzy deep learning feature data classification.
Authors - Titi Andriani, Chairul Hudaya, Iwa Garniwa Abstract - The transition toward more sustainable renewable energy sources has driven advancements in energy storage technology, including the development of Battery Energy Storage Systems (BESS). To improve the reliability and efficiency of BESS, implementing an effective monitoring system is essential, especially for detecting and diagnosing battery faults. The most commonly utilized methodologies for the diagnosis of faults in battery systems encompass knowledge-based, model-based, and data-based approaches. Artificial Intelligence (AI) holds significant potential to enhance fault diagnosis systems through predictive models capable of analyzing large datasets, identifying patterns, and forecasting potential faults. This work offers a thorough investigation of AI applications for BESS fault diagnosis, supported by an in-depth review of reliable sources such as Science Direct, IEEE Xplore, and Scopus. A total of 723 papers from scientific publications over the last five years were initially considered in this research. Following a rigorous screening process, including duplicate removal and the application of exclusion and inclusion criteria, 28 studies were selected for quantitative analysis. This study not only examines the types of faults that can be diagnosed but also assesses the challenges associated with recent advancements in this technology. In this context, the research identifies several aspects that have been applied within the theory of AI-based fault diagnosis for BESS and offers recommendations for further research. The results of this study are intended to aid in the creation of fault diagnosis systems that are more dependable and effective, which in turn will support the transition to cleaner and more sustainable energy.
Authors - Vasyl Yurchyshyn, Yaroslav Yurchyshyn Abstract - A living organism can be seen as a tool designed to perform specific functions, while both living and non-living matter represent distinct manifestations of nature. This work proposes considering living and non-living matter as physical systems, integrating existing scientific and technological advancements in the fields of physics, biology, and computer science. It suggests that scientific and technological developments in physical systems can also be applied to biological systems. The work addresses issues related to coding within living organisms and physical systems, and explores potential models for their functioning. The use of the golden ratio in living organisms and the potential benefits of applying these codes to physical systems are examined. Additionally, the refinement of physical quantities using the approaches discussed is addressed. Key issues in the modelling of living matter are highlighted, and various approaches to addressing these challenges are explored. The binary encoding and encoding based on π, e, and the golden ratio are considered.
Authors - Shahd Tarek, Ali Hamdi Abstract - Brain tumors represent one of the most critical health challenges due to their complexity and high mortality rates, necessitating early and precise diagnosis to improve patient outcomes. Traditional MRI interpretation methods rely heavily on manual analysis, which is time-consuming, error-prone, and inconsistent. To address these limitations, this study introduces a novel deep attentional framework that integrates multiple Convolutional Neural Network (CNN) base models—EfficientNet-B0, ResNet50, and VGG16—within a Multi-Head Attention (MHA) mechanism for robust brain tumor classification. Convolutional features extracted from these CNNs are fed into the MHA as Query (Q), Key (K), and Value (V) inputs, enabling the model to focus on the most distinguishing features within MRI images. By leveraging complementary feature maps from diverse CNN architectures, the MHA mechanism generates more refined, attentive representations, significantly improving classification accuracy. The proposed approach classifies MRI images into four categories: pituitary tumor, meningioma, glioma, and no tumor. A dataset of 7,023 labeled MRI images was curated from public repositories, including Figshare, SARTAJ, and Br35H, with preprocessing steps to standardize dimensions and remove margins. Experimental results demonstrate the strong performance of individual CNNs—VGG16 achieving 97.25% accuracy, ResNet50 98.02%, and EfficientNet-B0 93.21%. Moreover, the ensemble model integrating VGG16, EfficientNet-B0, and ResNet50 achieves the highest accuracy of 98.70%, surpassing other ensemble configurations such as ResNet50 + VGG16 + EfficientNet-B0 (96.95%) and VGG16 + ResNet50 + EfficientNet-B0 (95.96%). These findings underscore the effectiveness of multi-level attention in refining predictions and provide a reliable, automated tool to assist radiologists. The proposed framework highlights the transformative potential of deep learning in medical imaging, streamlining clinical workflows, and enhancing healthcare outcomes.
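A hedged PyTorch sketch of the fusion idea described above follows; the shared projection width, pooled feature dimensions, and single attention layer are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of multi-head attention fusion over features from three CNN backbones.
# Dimensions, projections, and pooling are illustrative assumptions only.
import torch
import torch.nn as nn

batch, d_model = 8, 256
# Stand-ins for pooled backbone features (typical widths: VGG16 512,
# ResNet50 2048, EfficientNet-B0 1280), projected to a shared d_model.
f_vgg = torch.randn(batch, 512)
f_res = torch.randn(batch, 2048)
f_eff = torch.randn(batch, 1280)

proj = nn.ModuleList([nn.Linear(512, d_model),
                      nn.Linear(2048, d_model),
                      nn.Linear(1280, d_model)])
tokens = torch.stack([p(f) for p, f in zip(proj, (f_vgg, f_res, f_eff))], dim=1)

mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)
fused, _ = mha(tokens, tokens, tokens)         # Q, K, V all come from the CNN features

classifier = nn.Linear(d_model, 4)             # pituitary, meningioma, glioma, no tumor
logits = classifier(fused.mean(dim=1))
print(logits.shape)                            # torch.Size([8, 4])
```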
Authors - Luis Puebla Rives, Connie Cofre-Morales, Miguel Rodriguez, Eduardo Puraivan, Marisela Vera Medinelli, Abigail Gonzalez, Ignacio Reyes, Karina Huencho-Iturra, Macarena Astudillo-Vasquez Abstract - This study analyzes the perception of both practicing and future English teachers regarding an activity designed under a didactic conceptual framework that uses SCRATCH as a tool to promote English language teaching to 4th grade primary school students. A survey was designed, validated by experts, and then applied to 28 participants. The reliability of the scale was analyzed, showing internal consistency of 0.96 and 0.99 using Cronbach’s alpha and G6, respectively. Implicative statistical analysis was used to explore the relationships between questions across different dimensions. The similarity tree identified two significant clusters with values of 0.6 and 0.54. The implicative graph and cohesive tree displayed implications with values exceeding 0.7. The findings highlight a high appreciation for the activity using SCRATCH, which is perceived as both viable and an effective facilitator of contextualized and meaningful learning.
Authors - La-or Kovavisaruch, Kriangkri Maneerat, Taweesak Sanpechuda, Krisada Chinda, Sodsai Wisadsud, Thitipong Wongsatho, Sambat Lim, Kamol Kaemarungsi, Tiwat Pongthavornkamol Abstract - The industrial sector in Thailand remains primarily characterized by traditional practices of Industry 2.0, which face significant challenges in transitioning to Industry 4.0. This research proposes a decentralized real-time location and status reporting system to address these issues. By utilizing Ultra-Wideband (UWB) technology combined with the Internet of Things (IoT), the newly developed "UWB Tag Plus" device eliminates the reliance on costly UWB gateways, instead transmitting data directly to cloud servers via 4G/5G networks. Implementing this system at an automotive parts assembly factory in Thailand reduced system costs by over 30%. The communications protocol between the tag and cloud server changed from IEEE 802.15.4 to TCP/IP, which enhanced operational flexibility. The proposed system makes advanced modernization more accessible for small and medium-sized enterprises. Furthermore, the "UNAI Data Analytic" tool provides real-time performance analytics for automated guided vehicles, empowering warehouse operators to optimize operations and improve efficiency.
Authors - Junichiro Ando, Satoshi Okada, Takuho Mitsunaga Abstract - Large Language Models (LLMs) like ChatGPT and Claude have demonstrated exceptional capabilities in content generation but remain vulnerable to adversarial jailbreak attacks that bypass safety mechanisms to output harmful content. This study introduces a novel jailbreak method targeting Autodefense, a multi-agent defense framework designed to detect and mitigate such attacks. By combining obfuscation techniques with the injection of harmless plaintext, our proposed method achieved a high jailbreak attack success rate (ASR), reaching a maximum of 95.3% across different obfuscation methods, which marks a significant increase compared to the ASR of 7.95% without our proposed method. Our experiments demonstrate the effectiveness of the proposed method in bypassing the Autodefense system.
Authors - Hana Ulinnuha, Mukhlish Rasyidi, Yanti Tjong, Husna Putri Pertiwi, Wendy Purnama Tarigan, Michael Tegar Wicaksono Abstract - Tourism villages have recently become central to Indonesia's tourism development strategy, contributing significantly to local, regional, and even national economic growth. With the increasing number of tourism villages, understanding tourists' perspectives is essential for ensuring their sustainability. Tourist reviews on online platforms provide valuable insights into their experiences and expectations. Sentiment analysis, widely used in tourism research, enables the extraction and identification of opinions from these unstructured data sources, offering a deeper understanding of visitor sentiments. This study employs Large Language Models (LLMs) to analyze tourist reviews of Indonesian tourism villages. Unlike common methods, LLMs provide advanced capabilities for both sentiment analysis and the evaluation of the 4A tourism components—Attraction, Accessibility, Amenities, and Ancillary services. By examining positive, neutral, and negative reviews, the research identifies key factors that shape tourist experiences. The findings offer practical recommendations for tourism village managers, not only to enhance visitor satisfaction but also to support the government's goal of fostering economic growth in tourism and rural areas. The study demonstrates the potential of LLM-based sentiment analysis as a valuable tool for advancing Indonesia's tourism industry.
Authors - Mark Bhunu, Timothy T Adeliyi Abstract - The proliferation of social media (SM) platforms has made them an integral part of our daily lives, significantly shaping how we interact and engage with the world. While SM offers benefits such as social connectedness and support, its impact on the psychological health and well-being of young individuals has both positive and negative dimensions. Understanding these effects is essential to developing strategies for mitigating the adverse outcomes associated with its use. A systematic literature review was conducted to explore the influence of SM usage on the mental health and well-being of young adults aged 18 to 35. Drawing insights from 25 publications across three databases, the study identified common themes related to SM's effects on this demographic. The findings reveal a correlation between SM use and mental health outcomes, with benefits including enhanced social support but also risks such as depression, anxiety, low self-esteem, and increased vulnerability to cyberbullying. These results highlight the urgent need for targeted interventions to address the negative consequences of SM on the mental health and overall well-being of young adults.
Authors - Md Fahim Afridi Ani, Abdullah Al Hasib, Munima Haque Abstract - This research explores the possibility of improving insect farming by integrating Artificial Intelligence (AI), unlocking the complicated relationship between butterflies and the plants they pollinate to reconsider the way species are classified and to help redraw farming practices for butterflies. Traditional methods of butterfly classification are morphologically and behaviorally intensive, and thus mostly very time-consuming to conduct, considering that most of them involve a high level of subjective interpretation. We therefore apply our approach to ecological interactions involving butterfly species and their respective plants for efficient data-driven solutions. The work also focuses on the application of AI in drawing full benefit from butterfly farming, trying to determine where each species will be best located. The system will, therefore, classify and manage butterflies with much more ease, saving the time and energy usually spent on conventional classification methods and passing those savings on to the farmer or industrial client. The research deepens the understanding of insect-plant relationships for better forecasting of butterfly behavior and, therefore, healthier ecosystems through optimized pollination and habitat balance. For that purpose, a dataset of butterfly species and related plants was developed, on which machine learning models were applied, including decision trees, random forests, and neural networks. It turned out that the neural network outperformed the others with an accuracy of 93%. Apart from classification, the system helps in the identification of a habitat to provide the best possible conditions for the rearing of butterflies. Application of AI in this field simplifies the work of butterfly farming, making it an important tool for improving growth and the conservation of biodiversity. Integrating machine learning into ecological research and industry provides scalable, time-efficient solutions for the classification of species toward the sustainable farming of butterflies.
Authors - Zachary Matthew Alabastro, Stephen Daeniel Mansueto, Joseph Benjamin Ilagan Abstract - Product innovation is critical in strategizing business decisions in highly competitive markets. For product enhancements, the entrepreneur must garner data from a target demographic through research. A solution to this involves qualitative customer feedback. The study proposes the viability of artificial intelligence (AI) as a co-pilot model to simulate synthetic customer feedback with agentic systems. Prompting with ChatGPT-4o's homo silicus attribute can generate feedback on certain business contexts. Results show that large language models (LLMs) can generate qualitative insights to utilize in product innovation. The approach appears to generate human-like responses through few-shot techniques and Chain-of-Thought (CoT) prompting. Data was validated with a Python script. Cosine similarity was used to quantify the correspondence between synthetic and actual customer feedback. This model can be essential in reducing the total resources needed for product evaluation through preliminary analysis, which can help in sustaining competitive advantage.
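Since the abstract mentions a Python script and cosine similarity, a small sketch of that comparison is given below; the example sentences and the TF-IDF representation are illustrative assumptions.

```python
# Sketch of a cosine-similarity check comparing synthetic (LLM-generated)
# feedback against actual customer feedback. Example text is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

synthetic = ["The checkout flow felt slow and confusing on mobile.",
             "I would pay more if the app remembered my usual order."]
actual = ["Checkout takes too long on my phone.",
          "Please let the app save my regular order."]

vec = TfidfVectorizer().fit(synthetic + actual)
sim = cosine_similarity(vec.transform(synthetic), vec.transform(actual))
print(sim.round(2))      # pairwise similarity between synthetic and real feedback
```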
Authors - Jeehaan Algaraady, Mohammad Mahyoob Albuhairy Abstract - Sarcasm, a sentiment often used to express disdain, is the focus of our comprehensive research. We aim to explore the effectiveness of various machine learning and deep learning models, such as Support Vector Machine (SVM), Recurrent Neural Networks (RNN), Bidirectional Long Short-Term Memory (BiLSTM), and fine-tuned pre-trained transformer-based (BERT) models, for detecting sarcasm using the News Headlines dataset. Our thorough framework investigates the impact of the DistilBERT method for text embeddings on enhancing the accuracy of the DL models (RNN and BiLSTM) for training and classification. To assess the proposed models, the authors utilized four performance metrics: F1 score, recall, precision, and accuracy. The outcomes revealed that incorporating the BERT model achieves outstanding performance and outperforms other models for impressive sarcasm classification with a state-of-the-art F1 score of 98%. The outcomes also revealed that the F1 scores for SVM, BiLSTM, and RNN are 93%, 95.05%, and 95.52%, respectively. Our experiment on the News Headlines dataset demonstrates that incorporating DistilBERT to process the word vectors enhances the performance of the RNN and BiLSTM models, notably improving their accuracy. The accuracy of the BiLSTM and RNN models when incorporating TF-IDF, Word2Vec, and GloVe embeddings was 93.9% and 93.8%, respectively. In contrast, these scores increased to 95.05% and 95.52% when these models incorporated DistilBERT for text embedding. This improvement can be attributed to the capability of DistilBERT to capture contextual information and semantic relationships between words, thereby enriching the word vector representation.
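A short sketch of extracting DistilBERT embeddings of the kind used to feed the RNN and BiLSTM classifiers follows; the model checkpoint, mean pooling, and example headlines are assumptions for illustration.

```python
# Sketch of extracting DistilBERT sentence embeddings for downstream classifiers;
# pooling choice and checkpoint name are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

headlines = ["Area man wins lifetime supply of things he is allergic to",
             "Local council approves new bridge over river"]
inputs = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state       # (batch, tokens, 768)

# Mean-pool token embeddings into one vector per headline for the classifier.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)                              # torch.Size([2, 768])
```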
Authors - Lois Abigail To, Zachary Matthew Alabastro, Joseph Benjamin Ilagan Abstract - Customer development (CD) is a Lean Startup (LS) methodology for startups to validate their business hypotheses and refine their business model based on customer feedback. This paper proposes designing a large language model-based multi-agent system (LLM MAS) to enhance the customer development process by simulating customer feedback. Using LLMs’ natural language understanding (NLU) and synthetic multi-agent capabilities, startups can conduct faster validation while obtaining preliminary insights that may help refine their business model before engaging with the real market. The study presents a model in which the LLM MAS simulates customer discovery interactions between a startup founder and potential customers, together with design considerations to ensure real-world accuracy and alignment with CD. If carefully designed and implemented, the model may serve as a useful co-pilot that accelerates the customer development process.
Authors - Prince Kelvin Owusu, George Oppong Ampong, Joseph Akwetey Djossou, Gibson Afriyie Owusu, Thomas Henaku, Bless Ababio, Jean Michel Koffel Abstract - In today's dynamic digital landscape, understanding customer opinions and sentiments has become paramount for businesses striving to maintain competitiveness and foster customer loyalty. However, the banking sector in Ghana faces challenges in effectively harnessing innovative technologies to grasp and respond to customer sentiments. This study aims to address this gap by investigating the application of ChatGPT technology within Ghanaian banks to augment customer service and refine sentiment analysis in real-time. Employing a mixed-method approach, the study engaged 40 representatives, including IT specialists, data analysts, and customer service managers, from four banks in Ghana through interviews. Additionally, 160 customers, 40 from each bank, participated in a survey. The findings revealed a significant misalignment between customer expectations and current service provisions. To bridge this gap, the integration of ChatGPT technology is proposed, offering enhanced sentiment analysis capabilities. This approach holds promise for elevating customer satisfaction and fostering loyalty within Ghana's competitive banking landscape.
Authors - Japheth Otieno Ondiek, Kennedy Ogada, Tobias Mwalili Abstract - This experiment models the implementation of distance metrics and three-way decisions for K-Nearest Neighbor (KNN) classification. As a machine learning method, KNN has inherent classification deficits due to high computing power requirements, outliers, and the curse of dimensionality. Many researchers have experimented and found that a combination of various algorithmic methods can lead to better results in prediction and forecasting fields. In this experimentation, we used the strengths of the Euclidean distance metric for computing query distances to nearest neighbors, combined with weighted three-way decisions, to model a highly adaptable and accurate KNN classification technique. The implementation is based on an experimental design method to ascertain whether the improved computed Euclidean distance and weighted three-way decision classification achieve better computing power and predictability through classification in the KNN model. Our experimental results revealed that distance metrics significantly affect the performance of the KNN classifier through the choice of K-values. We found that the K-value on the applied datasets tolerates noise levels to a certain degree, while some distance metrics are less affected by noise. This experiment primarily focused on the finding that the best K-value from the distance metric measure guarantees three-way KNN classification accuracy and performance. The combination of the best distance metrics and the three-way decision model for the KNN classification algorithm has shown improved performance compared with other conventional algorithm set-ups, making it more suitable for classification in the context of this experiment. It outperforms KNN, ANN, DT, NB, and SVM on the crop yield datasets applied in the experiment.
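The distance-weighted, three-way KNN decision described above can be sketched as follows; the dataset, K value, and the alpha and beta thresholds are illustrative assumptions.

```python
# Sketch of distance-weighted KNN with a three-way decision rule
# (accept class 1 / accept class 0 / defer). Thresholds and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
query = X[0]

# Euclidean distances from the query to all other points.
dist = np.linalg.norm(X[1:] - query, axis=1)
k_idx = np.argsort(dist)[:7]                       # K = 7 nearest neighbours
weights = 1.0 / (dist[k_idx] + 1e-9)               # inverse-distance weighting

score = np.sum(weights * y[1:][k_idx]) / np.sum(weights)   # weighted vote for class 1

# Three-way decision: clear accept, clear reject, or defer to the boundary region.
alpha, beta = 0.7, 0.3
if score >= alpha:
    decision = "accept class 1"
elif score <= beta:
    decision = "accept class 0"
else:
    decision = "defer (boundary region)"
print(f"score = {score:.2f} -> {decision}")
```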
Authors - Mmapula Rampedi, Funmi Adebesin Abstract - The healthcare sector has generally been reluctant to adopt digital technologies. However, the COVID-19 pandemic pushed the industry to accelerate its digital transformation. Digital twins, virtual replicas of human organs or the entire human body, are revolutionizing healthcare and the management of healthcare resources. Digital twins can improve the accuracy of patients' diagnoses through access to their virtual replica data. This enables healthcare professionals to make informed decisions about patients' conditions and treatment options. This paper presents the results of a systematic literature review that investigated how digital twins are being utilized in the healthcare sector. A total of 6,714 papers published between 2019 and April 2024 were retrieved from four databases using specific search terms. A screening process based on inclusion and exclusion criteria resulted in a final set of 34 studies that were analyzed. The qualitative content analysis of the 34 studies resulted in the identification of five themes, namely: (i) the technologies that are integrated into digital twins; (ii) the medical specialties where digital twins are being used; (iii) the different application areas of digital twins in healthcare; (iv) the benefits of the application of digital twins in healthcare; and (v) the challenges associated with the use of digital twins in healthcare. The outcome of the study showcased the potential for the adoption of digital twins to revolutionize healthcare service delivery by mapping the medical specialties of use to the different application areas. The study also highlights the benefits and challenges associated with the adoption of digital twins in the healthcare sector.
Authors - Nazli Tokatli, Mucahit Bayram, Hatice Ogur, Yusuf Kilic, Vesile Han, Kutay Can Batur, Halis Altun Abstract - This study aims to create deep learning models for the early identification and classification of brain tumours. Models such as U-Net, DAU-Net, DAU-Net 3D, and SGANet have been used to evaluate brain MRI images accurately. Magnetic resonance imaging (MRI) is the most commonly used method in brain tumour diagnosis, but interpretation is a complicated procedure due to the brain's complex structure. This study looked into the ability of deep learning architectures to increase the accuracy of brain tumour diagnosis. We used the BraTS 2020 dataset to segment and classify brain tumours. The U-Net model designed for the project achieved an accuracy rate of 97% with a loss of 47%, DAU-Net reached 90% accuracy with a loss of 33%, DAU-Net 3D achieved 99% accuracy with a loss of 35%, and SGANet achieved 99% accuracy with a loss of 20%, all demonstrating effective outcomes. These findings aim to improve patient care quality by speeding up medical diagnosis processes using computer-aided technology. Doctors can detect 3D tumours from MRI images using software developed as part of the research. Project management across the study's data collection, model creation, and evaluation stages was handled through structured work packages. Regarding brain tumour segmentation, the 3D U-Net architecture with multi-head attention mechanisms provides doctors with effective tools for planning surgery and giving each patient the best treatment options. The user-friendly Turkish interface enables simple MRI image uploads and quick, understandable findings.
Authors - Radford Burger, Olawande Daramola Abstract - Clinical Decision Support Systems (CDSS) have the potential to significantly improve healthcare quality in resource-limited settings (RLS). Despite evidence supporting the effectiveness of CDSS, their adoption and implementation rates remain low in RLS due to low levels of computer literacy among health workers, fragmented and unreliable infrastructure, and technical challenges. A thorough understanding of requirements is critical for the design of CDSS, which will be relevant to RLS. This paper explores the elicitation and prioritisation of requirements of a CDSS tailored to gait-related diseases in RLS. To do this, we conducted a qualitative literature analysis to identify potential requirements. After that, the requirements were presented to gait analysis experts for revision and prioritisation using the MoSCoW requirements prioritisation technique. The analysis of the results of the prioritisation process shows that for the functional requirements, 59.1% are fundamental and essential (Must Have), 36.3% are important but not fundamental (Should Have), 4.5% are negotiable requirements that are nice-to-have, but not important or fundamental (Could Have). All the non-functional requirements (100%) that pertain to usability and security were considered fundamental and essential (Must Have). This study provides a solid foundation for understanding the requirements of CDSS that are tailored to gait-related diseases in RLS. It also provides a guide for software developers and researchers on the design choices regarding the development of CDSS for RLS.
Authors - Omar Ahmed Abdulkader, Bandar Ali Alrami Al Ghadmi, Muhammad Jawad Ikram Abstract - In an era characterized by escalating digital threats, cybersecurity has emerged as a paramount concern for individuals and organizations globally. Traditional security measures, often reliant on centralized systems, face significant challenges in combating increasingly sophisticated cyberattacks, leading to substantial data breaches, financial losses, and erosion of trust. This paper investigates the transformative potential of blockchain technology as a robust solution to enhance cybersecurity frameworks. By leveraging the core principles of blockchain—decentralization, transparency, and immutability—this study highlights how blockchain can address critical cybersecurity challenges. For instance, the use of blockchain for data integrity ensures that information remains unaltered and verifiable, significantly reducing the risk of tampering. Furthermore, decentralized identity management systems can provide enhanced security against identity theft and phishing attacks, allowing users to maintain control over their personal information. Through a review of current applications and case studies, this paper illustrates successful implementations of blockchain in various sectors, including finance, healthcare, and supply chain management. Notable results include a reported 30% reduction in fraud rates within financial transactions utilizing blockchain technology and a marked improvement in incident response times due to the transparency and traceability offered by blockchain solutions. Despite its promising applications, this paper also addresses existing challenges, such as scalability issues that can hinder transaction speed, regulatory concerns that complicate implementation, and technical complexities that require specialized knowledge. These barriers pose significant obstacles to the widespread adoption of blockchain in cybersecurity. In conclusion, this paper emphasizes the need for further research and development to overcome these challenges and optimize the integration of blockchain within cybersecurity frameworks. By doing so, we can foster a safer digital environment and enhance resilience against the evolving landscape of cyber threats.
Authors - Catia Silva, Nelson Zagalo, Mario Vairinhos Abstract - The preservation of cultural heritage, crucial for maintaining cultural identity, is increasingly threatened by natural degradation and socio-economic changes. Cultural tourism, supported by information and communication technologies, has become a key strategy for sustaining and promoting heritage sites. However, research on the most effective digital elements for amplifying tourist engagement remains limited. To address this gap, the present study explored the use of the Cultural Engagement Digital Model, which integrates participatory activities through game, narrative, and creativity elements, to enhance visitor engagement at cultural sites. The study focused on designing and testing three prototypes for Almeida, a historical village in Guarda, Portugal, involving both visitors and interaction design experts to evaluate user preferences regarding the proposed activities. The findings of this study indicate that activities aligned with participatory dimensions can effectively engage users. These results help to solidify the model as a valuable instrument for designing mobile applications capable of promoting tourist engagement.
Authors - Juliana Silva, Pedro Reisinho, Rui Raposo, Oscar Ribeiro, Nelson Zagalo Abstract - As global life expectancy rises and the population of older adults increases, a higher prevalence of age-related diseases, such as dementia, is being observed. However, dementia-like symptoms are not exclusively caused by neurodegenerative conditions; pseudodementia, associated with late-life depression, can mimic the symptoms of dementia but may be potentially reversible with appropriate interventions. Despite this, individuals with pseudodementia still have a higher risk of progressing to neurodegenerative dementia. To counteract this possibility and aid in symptom reversal, non-pharmacological interventions may be a potential treatment. The present case study explored the feasibility of promoting storytelling through virtual reminiscence therapy in an older adult with pseudodementia, while also assessing the level of technological acceptance. The intervention included two sessions: one using a digital memory album and another utilizing 360º videos of personally significant locations. The results support the viability of using virtual reality as a therapeutic instrument to stimulate reminiscence and promote storytelling with a manageable learning curve and without inducing symptoms of cybersickness.
Authors - Omar Hamid, Homaiza Saud Ahmad, Ahmed Albayah, Fatima Dakalbab, Manar Abu Talib, Qassim Nasir Abstract - The science of photogrammetry has been developing rapidly in recent years. With the rise of tools adopting this science and the advancement of computer vision technologies, the potential of such software is being acknowledged by researchers and integrated by market professionals into various fields. To cope with the rapid changes and expanding range of photogrammetry tools, a methodology was developed to identify the most widely adopted software tools, whether open-source or commercial, by the research community and market professionals. This resulted in the identification of 37 tools for which we developed a comprehensive review and presented our findings through visualizations such as pie charts and graphs. Furthermore, a comparison between the tools was carried out based on seven different attributes describing them, in order to assist professionals and individuals in picking software for specific use cases.
Authors - Mona Kherees, Karen Renaud, Dania Aljeaid Abstract - Smart Tourism is the most rapidly expanding economic sector, with data serving as the foundation of all Smart Tourism operations when travelers participate in various tailored travel services before, during, and after their journeys. The massive volume of data collected through various Smart Tourism Technologies raises tourists’ concerns. They might adopt privacy-preserving behaviors, like restricting sharing, fabricating data, or refusing to disclose requested information. Consequently, service providers manipulate users into disclosing personal data by employing persuasive marketing techniques based on Cialdini’s principles. This research aimed to investigate how the persuasion strategies of Cialdini employed by tourism organizations or service providers influence privacy concerns and users’ willingness to share personal information. A mixed-methods approach, incorporating expert reviews, was utilized to propose and validate a framework based on the Antecedents-Privacy Concerns-Outcome (APCO) model.
Authors - Otshepeng Lethabo Malebye, Tevin Moodley Abstract - This paper explores integrating Knowledge Management principles within Project Management frameworks to address critical challenges project teams face, such as the loss of individual experience and the difficulty of employees applying their knowledge to the projects of which they are a part. By exploring KM principles and their relevance to project success, the paper identifies common problems encountered in knowledge sharing, such as tacit knowledge externalisation and documentation within project environments. A proposed solution is presented by examining existing systems, such as DocuWare, alongside the Knowledge Management and Project Management frameworks. This paper introduces a framework to demonstrate the significance of employing systematic processes for identifying, capturing, sharing, and applying knowledge within project teams. It utilises techniques such as interviews, post-project reviews, communities of practice, and training. By using this integrated approach, the proposed solution aims to break down knowledge silos, facilitate tacit knowledge externalisation, and improve knowledge documentation.
Authors - Ahamed Nishath S, Murugeswari R Abstract - Researchers in the field of artificial intelligence are increasingly interested in exploring how to spot and counteract the spread of fake news. Compared to machine learning approaches, deep learning methods are superior in their ability to reliably identify instances of false news. This study analyses the efficacy of various neural network topologies in classifying news items into two distinct categories: false and real. This work considers a hybrid model that merges CNN and RNN layers and incorporates a multi-channel mechanism, which is the most complex model examined. When determining the model's overall performance, criteria such as accuracy, precision, and recall rates are taken into consideration. According to the findings, the hybrid model efficiently attains a high degree of accuracy, reaching 99.16%. These results highlight the adaptability of various neural network designs in distinguishing between real and false news, revealing key insights that have the potential to be applied in practical scenarios involving the verification of information and the evaluation of its validity.
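As a rough illustration of the kind of architecture described above, the following sketch shows a multi-channel CNN plus RNN text classifier in Keras; the vocabulary size, sequence length, filter sizes, and other hyperparameters are assumptions for illustration and are not the authors' configuration.

```python
# Hedged sketch of a multi-channel CNN + RNN text classifier (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SEQ_LEN, EMB = 20000, 300, 128   # assumed sizes, not the paper's settings

inp = layers.Input(shape=(SEQ_LEN,))
emb = layers.Embedding(VOCAB, EMB)(inp)

# Parallel convolutional "channels" with different kernel sizes.
channels = []
for k in (3, 4, 5):
    c = layers.Conv1D(64, k, activation="relu", padding="same")(emb)
    c = layers.MaxPooling1D(2)(c)
    channels.append(c)
merged = layers.Concatenate()(channels)

# Recurrent layer on top of the merged convolutional features.
rnn = layers.Bidirectional(layers.LSTM(64))(merged)
out = layers.Dense(1, activation="sigmoid")(rnn)   # real vs. fake

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.summary()
```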
Authors - Massimo Carlini, Giuseppina Anatriello, Elisabetta Cicchiello Abstract - The modern business context and the amount of data available to companies and organizations have made decision-making processes even more complex and articulated. This pushes companies to provide a better product or service for customers, reasoning in terms of quality, flexibility and responsiveness to their requests and needs. It is in this context that the concepts of customer centricity and satisfaction arise: the need for companies to satisfy demand by offering efficient, high-quality treatment of customer needs based on a deep and solid knowledge of the customers themselves. This paper reports on the activities carried out by Anas S.p.A., through its Customer Service, over the last few years to improve the Digital Customer Experience, making available to customers the knowledge and experience acquired over the years. The objective, in terms of Customer Centricity, was to put the customer at the center of the offer, providing them with more modern, innovative, intelligent and efficient dialogue tools.
Authors - Aniko Vagner Abstract - NoSQL databases are grouped into many categories, one of which is key-value databases. Our goal is to examine whether a system-independent key-value logical model exists. The idea came from the Redis database, which has the opaque key-value type named string, but also supports lists, hashes, sets, sorted sets, etc. By comparison, document databases storing JSON documents can have a system-independent logical model. We gathered databases said to fall into the key-value category and read their documentation with regard to the stored data structures. We found many subcategories under the key-value category. We found that clean key-value databases with buckets can have a system-independent database model in which the buckets collect the key-value pairs, and the model is very simple. We could not identify a system-independent logical model for the remaining subcategories. Additionally, we recognised some viewpoints from which the data model of key-value databases can be examined. Altogether, considering all subcategories, we cannot speak of a system-independent logical data model for key-value databases.
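A minimal sketch of the system-independent logical model described above, in which buckets simply collect opaque key-value pairs; the class and method names are illustrative and not tied to any particular database.

```python
# Minimal sketch of a "bucket of opaque key-value pairs" logical model
# (illustrative only; names are not tied to any specific key-value database).
class KeyValueStore:
    def __init__(self):
        self.buckets = {}                      # bucket name -> {key: value}

    def put(self, bucket, key, value):
        self.buckets.setdefault(bucket, {})[key] = value

    def get(self, bucket, key, default=None):
        return self.buckets.get(bucket, {}).get(key, default)

    def delete(self, bucket, key):
        self.buckets.get(bucket, {}).pop(key, None)

store = KeyValueStore()
store.put("sessions", "user:42", b"opaque-serialized-blob")
print(store.get("sessions", "user:42"))
```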
Authors - Helvira Maharani Tresnadi, Rannie Oges Pebina, Permata Chandra Lagitha, Nurul Sukma Lestari Abstract - This research aims to analyze the relationships between career calling, adaptability, and awareness of STARA technology to provide insights into career development during this critical transition phase. The methodology employed in this research is quantitative, with data collected through online questionnaire surveys. The data was analyzed using partial least squares structural equation modeling (PLS-SEM) and Smart PLS software. The participants are students in Jakarta, with 413 respondents completing the survey. The findings indicate that both career exploration and self-efficacy have a positive influence on career adaptability. Furthermore, career exploration and self-efficacy significantly and positively affect career calling, while career calling positively affects career adaptability. The results also indicate that STARA Awareness reduces the influence of career calling on career adaptability, although the findings remain significant. The mediating variable demonstrates a positive and significant effect on the relationship between career exploration, self-efficacy, and career adaptability. The novelty of this research is that it examines career calling in school children, which is still rarely studied compared to employees, to help students recognize their potential and interests early on. For future research, it is recommended to investigate variables within a broader scope at the national and international levels.
Authors - ThiTuyetNga Phu, HongGiang Nguyen Abstract - Inspecting the compressive strength of buildings' concrete is essential for ensuring the safety of households. This paper examined study samples collected using the nondestructive testing (NDT) method, combining Ultrasonic Pulse Velocity (UPV) and Rebound Hammer (RH) tests, to check the beams of apartments over 30 years old. Firstly, the research samples were analyzed for the level of data variation using the exploratory data analysis (EDA) method to assess the reliability and correlation of the data. Next, the study focused on the prediction of concrete compressive strength using five activation functions (AFs) (tanhLU, tanh, leakyLU, reLU, and sigmoid) with two deep learning models: long short-term memory (LSTM) and gated recurrent unit (GRU). Lastly, the experimental results showed that the GRU model combined with the two hybrid AFs gave a fairly accurate prediction level, while the remaining AFs showed acceptable results.
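The following hedged sketch illustrates how activation functions might be compared in a small GRU regressor for compressive-strength prediction; the layer sizes, standard activations, and placeholder data are assumptions and do not include the study's hybrid tanhLU/leakyLU variants.

```python
# Hedged sketch: comparing activation functions in a small GRU regressor
# (layer sizes, activations, and data are placeholders, not the study's setup).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

def build_gru(activation):
    model = Sequential([
        layers.Input(shape=(10, 2)),          # e.g. 10 readings of (UPV, rebound) per beam
        layers.GRU(32, activation=activation),
        layers.Dense(1),                      # predicted compressive strength
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

X = np.random.rand(200, 10, 2).astype("float32")   # placeholder data
y = np.random.rand(200, 1).astype("float32")

for act in ("tanh", "relu", "sigmoid"):
    model = build_gru(act)
    hist = model.fit(X, y, epochs=5, verbose=0, validation_split=0.2)
    print(act, hist.history["val_loss"][-1])
```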
Authors - Prathyush Kiran Holla, Manish M, Purvi Hande, Akshay Anand, Nirmala Devi M Abstract - Integrated Circuits (ICs) allow attackers to insert malicious implants called Hardware Trojans (HT). These Trojans leak information or alter circuit functionality. This threat is particularly critical in IoT devices, where compromised hardware can lead to drastic consequences across networks, potentially exposing entire systems to data loss. Over the past decade, numerous Hardware Trojan Detection (HTD) methods have been developed, which is crucial for securing IoT ecosystems, where detecting hardware-level threats early can prevent cascade failures. Current HTD techniques still face challenges with detection accuracy, class-imbalance handling, and high false positive/negative rates. We propose an HTD method using XGBoost, enhanced with focal loss to better handle class imbalance. XGBoost is combined with both graph-based and structural features to achieve higher accuracy compared to using each feature type individually. This approach is particularly valuable for IoT applications, where interconnected systems require robust detection methods. The proposed model, evaluated on an extensive dataset comprising 41 combinational and sequential benchmark circuits, achieves an impressive accuracy of 98.85%, demonstrating superior performance in HT detection across diverse circuit architectures. Such high accuracy is essential for IoT deployments, where false positives can trigger unnecessary disruptions across connected systems, and false negatives can leave critical infrastructure vulnerable to attacks.
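As a hedged sketch of the general approach, the snippet below trains an XGBoost classifier on concatenated graph-based and structural features with simple class-imbalance weighting; the focal-loss objective used in the paper would be supplied as a custom objective and is omitted here, and all features and data are placeholders.

```python
# Hedged sketch: XGBoost on concatenated graph-based + structural features with
# class-imbalance weighting (the paper's focal loss would replace the default loss).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder feature matrices: one row per circuit net/gate.
graph_feats = np.random.rand(1000, 8)         # e.g. centrality, fan-in/fan-out statistics
struct_feats = np.random.rand(1000, 6)        # e.g. gate-type and connectivity counts
X = np.hstack([graph_feats, struct_feats])
y = (np.random.rand(1000) < 0.05).astype(int)   # rare Trojan class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
imbalance = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)

clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                    scale_pos_weight=imbalance, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```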
Authors - Andriy Tevjashev, Oleksii Haluza, Dmytro Kostaryev, Anton Paramonov, Natalia Sizova Abstract - The study focuses on estimating the accuracy of aircraft positioning using an infocommunication network of optical-electronic stations (OES). The problem addressed is the numerical estimation of the shape and boundaries of the region where the aircraft is located, with a given probability, at any fixed time during video surveillance in optical and infrared frequency ranges. The method departs from the traditional assumption of normal distribution for random errors in aircraft location estimates and employs Chebyshev's inequality to construct upper bounds for the uncertainty region. It is shown that the dispersion ellipsoid, often used to estimate the metrological characteristics of OES, is a rough approximation of the actual region where the aircraft is located with a given probability. The following results were obtained: – a method for constructing the actual uncertainty region of an aircraft’s location, based on the statistical properties of random errors in video surveillance from each OES and their relative spatial arrangement to the aircraft at each surveillance moment; – a software implementation of the numerical method for constructing and visualizing upper estimates of the shape and boundaries of the uncertainty region in aircraft positioning, using the OES network for trajectory measurements.
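The central probabilistic step can be illustrated with Chebyshev's inequality, which gives a distribution-free upper bound on the uncertainty radius; the standard deviation used below is an assumed value for illustration only.

```python
# Hedged sketch of the key inequality: Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k**2
# gives a distribution-free radius sigma / sqrt(1 - p) for confidence level p, a
# conservative upper bound compared with the radius a normality assumption would give.
import math

def chebyshev_radius(sigma, confidence):
    """Radius r such that the estimate lies within r of the true position with probability >= confidence."""
    return sigma / math.sqrt(1.0 - confidence)

sigma = 15.0        # assumed positional standard deviation in metres (illustrative)
for p in (0.90, 0.95, 0.99):
    print(p, round(chebyshev_radius(sigma, p), 1), "m")
```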
Authors - Qian Jiang, Kin Wai Michael Siu, Jiannong Cao Abstract - Immersive technologies, including augmented reality (AR), virtual reality (VR), and mixed reality (MR), are widely used in exhibitions to engage audiences. This study examines immersive technologies in the context of museum learning with a focus on exhibitions. This study screened and analyzed 104 research papers in this scope closely related to the topic of immersive technologies and museums, which were selected based on search results for four keywords (human behavior, immersive technologies, exhibitions, and embedded experiences) to clarify the impact of immersive technologies on visitor behavior from existing exhibition themes. We conceptualized immersive technologies and categorized the literature according to theme and technology to clarify the relationship between immersive technology applications and exhibition topics. Existing research identifies a positive correlation between immersive technology and positive visitor experiences; however, there is less research on immersive technology and museum learning for special populations, and assessment tools for evaluating the effectiveness of technological application in this context have yet to be tested. The method of co-occurrence is used to analyze what factors need to be considered for the application of immersive technologies in the context of museum learning. Ultimately, a framework for immersive technological application is summarized.
Authors - Unaizah Mahomed, Machdel Matthee Abstract - The use of Professional Social Media Platforms (PSMPs) has become more popular in recent years. As COVID-19 spread globally, the world was forced to fast-track digitalisation, remote and hybrid working models, and online hiring. This systematic literature review aims to give insight into the role of artificial intelligence (AI) algorithms in professional social media platforms and to develop a deeper understanding of the need for these algorithms. It incorporates findings from previously published peer-reviewed literature to understand how AI-driven systems are used to improve hiring through professional social media platforms. The review addresses hiring-related benefits, including but not limited to the applications of AI algorithms in PSMPs, candidate screening and sourcing, job matching, and efficiency, as well as concerns such as algorithmic bias, user privacy, regulations and ethical considerations. Significant effects on stakeholders are also addressed, together with the gaps in the research.
Authors - Indika Udagedara, Brian Helenbrook, Aaron Luttman Abstract - This paper presents a reduced order modeling (ROM) approach for radiation source identification and localization using data from a limited number of sensors. The proposed ROM method comprises two primary steps: offline and online. In the offline phase, a spatial-energetic basis representing the radiation field for various source compositions and positions is constructed. This is achieved using a stochastic approach based on principal component analysis and maximum likelihood estimation. The online step then leverages these basis functions for determining the complete radiation field from limited data collected from only a few detectors. The parameters are estimated using Bayes rule with a Gaussian prior. The effectiveness of the ROM approach is demonstrated on a simplified model problem using noisy data from a limited number of sensors. The impact of noise on the model’s performance is analyzed, providing insights into its robustness. Furthermore, the approach was extended to real-world radiation detection scenarios, demonstrating that these techniques can be used to localize and identify the energy spectra of mixed radiation sources, composed of several individual sources, from noisy sensor data collected at limited locations.
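The offline/online split can be sketched as follows: a PCA basis is built from field snapshots offline, and basis coefficients are then estimated online from a few noisy sensors via a MAP estimate with a zero-mean Gaussian prior; all dimensions and data are synthetic placeholders.

```python
# Hedged sketch of the two-step ROM idea: offline PCA basis, online Bayesian
# coefficient estimation from limited sensors (all sizes and data are synthetic).
import numpy as np

rng = np.random.default_rng(0)

# Offline: snapshots of the field at 500 spatial/energy points for 200 source configurations.
snapshots = rng.random((200, 500))
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:10]                               # first 10 principal components, shape (10, 500)

# Online: only 15 sensor locations are observed, with additive noise.
sensor_idx = rng.choice(500, size=15, replace=False)
true_field = snapshots[0]
y = true_field[sensor_idx] + 0.01 * rng.standard_normal(15)

A = basis[:, sensor_idx].T                    # maps coefficients -> sensor readings
prior_var, noise_var = 1.0, 1e-4
# MAP estimate with a zero-mean Gaussian prior on the coefficients (ridge-type solve).
coeffs = np.linalg.solve(A.T @ A / noise_var + np.eye(10) / prior_var,
                         A.T @ (y - mean[sensor_idx]) / noise_var)
reconstructed = mean + coeffs @ basis
print("relative error:", np.linalg.norm(reconstructed - true_field) / np.linalg.norm(true_field))
```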
Authors - Shima Pilehvari, Wei Peng, Yasser Morgan, Mohammad Ali Sahraian, Sharareh Eskandarieh Abstract - Overfitting is a common problem during model training, particularly for binary medical datasets with class imbalance. This research specifically addresses this issue in predicting Multiple Sclerosis (MS) progression, with the primary goal of improving model accuracy and reliability. By investigating various data resampling techniques, ensemble methods, feature extraction, and model regularization, the study thoroughly evaluates the effectiveness of these strategies in enhancing stability and performance for highly imbalanced datasets. Compared to prior studies, this research advances existing approaches by integrating Kernel Principal Component Analysis (KPCA), moderate under-sampling, Synthetic Minority Oversampling Technique (SMOTE), and post-processing techniques, including Youden’s J Statistic and manual threshold adjustments. This comprehensive strategy significantly reduced overfitting while improving the generalization of models, particularly the Multilayer Perceptron (MLP), which achieved an Area Under the Curve (AUC) of 0.98—outperforming previous models in similar applications. These findings establish important best practices for developing robust prognostic models for MS progression and underscore the importance of tailored solutions in complex medical prediction tasks.
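A hedged sketch of the described pipeline (KPCA features, moderate under-sampling plus SMOTE, an MLP, and a Youden's J threshold) might look as follows; the synthetic data and hyperparameters are placeholders and do not reproduce the study's setup.

```python
# Hedged sketch: KPCA + moderate under-sampling + SMOTE + MLP, with the decision
# threshold chosen by Youden's J statistic (synthetic, illustrative data only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, roc_auc_score
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("kpca", KernelPCA(n_components=10, kernel="rbf")),
    ("under", RandomUnderSampler(sampling_strategy=0.5, random_state=0)),
    ("smote", SMOTE(random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])
pipe.fit(X_tr, y_tr)

proba = pipe.predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, proba)
best = thresholds[np.argmax(tpr - fpr)]       # Youden's J = sensitivity + specificity - 1
print("AUC:", roc_auc_score(y_te, proba), "chosen threshold:", best)
```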
Authors - Ain Nadhira Mohd Taib, Fauziah Zainuddin, M. Rahmah Abstract - This paper presents AdaptiCare4U, an interactive mental health assessment in high school settings. By integrating adaptive technique with an establish mental health assessment instrument in a user-friendly format, Adap-tiCare4U improves the experience in answering mental health assessment. Through expert review validation technique, AdaptiCare4U demonstrates high effectiveness in accessibility, ease of use, and practical value with mean scores of 5, 4.2, and 4.4 respectively. Additionally, students’ perception further supports the tool’s usability, with positive feedback highlighting its engaging interface, use of multimedia elements, and stress-reducing design. A favorable usability rating from both students and experts makes AdaptiCare4U a promising tool for aiding counselors in conducting efficient mental health assessments.
Authors - Aayush Kulkarni, Mangesh bedekar, Shamla Mantri Abstract - This paper proposes a novel serverless computing model that addresses critical challenges in current architectures, namely cold start latency, resource inefficiency, and scalability limitations. The research integrates advanced caching mechanisms, intelligent load balancing, and quantum computing techniques to enhance serverless platform performance. Advanced distributed caching with coherence protocols is implemented to mitigate cold start issues. An AI-driven load balancer dynamically allocates resources based on real-time metrics, optimizing resource utilization. The integration of quantum computing algorithms aims to accelerate specific serverless workloads. Simulations and comparative tests demonstrate significant improvements in latency reduction, cost efficiency, scalability, and throughput compared to traditional serverless models. While quantum integration remains largely theoretical, early results suggest potential for substantial performance gains in tasks like function lookups and complex data processing. This research contributes to the evolving landscape of cloud computing, offering insights into optimizing serverless architectures for future applications in edge computing, AI, and data-intensive fields. The proposed model sets a foundation for more efficient, responsive, and scalable cloud solutions.
Authors - Nouha Arfaoui, Mohmed Boubakir, Jassem Torkani, Joel Indiana Abstract - The increasing reliance on surveillance systems and the vast amounts of video data have created a growing need for automated systems to detect violent and aggressive behaviors in real time. Manual video analysis is not only labor-intensive but also prone to errors, particularly in large-scale monitoring situations. Machine learning and deep learning have gained significant attention for their ability to enhance the accuracy and efficiency of violence detection in images and videos. Violence is a critical societal issue, occurring in public spaces, workplaces, and social environments, and is a leading cause of injury and death. While video surveillance is a key tool for monitoring such behaviors, manual monitoring remains inefficient and subject to human fatigue. Early ML methods relied on manual feature extraction, which limited their flexibility in dynamic scenarios. Ensemble techniques, including AdaBoost and Gradient Boosting, provided improvements but still required extensive feature selection. The introduction of deep learning, particularly Convolutional Neural Networks (CNNs), has enabled automatic feature learning, making them more effective in violence detection tasks. This study focuses on detecting violence and aggression in workplace settings by addressing key aspects such as violent actions and aggressive objects, utilizing various deep learning algorithms to identify the most efficient model for each task.
Authors - Kalupahanage A. G. A, Bulathsinhala D.N, Herath H.M.S.D, Herath H.T.M.T, Shashika Lokuliyana, Deemantha Siriwardana Abstract - The explosive growth of the Internet of Things (IoT) has had a substantial impact on daily life and businesses, allowing for real-time monitoring and decision-making. However, increased connectivity also brings higher security risks, such as botnet attacks and the need for stronger user authentication. This research explores how machine learning can enhance Internet of Things security by identifying abnormal activity, utilizing behavioral biometrics to secure cloud-based dashboards, and detecting botnet threats early. The researchers tested several machine learning methods, including K-Nearest Neighbors (KNN), Decision Trees, and Logistic Regression, on publicly available datasets. The Decision Tree model achieved an accuracy of 73% for anomaly identification, the best result among the evaluated models for dealing with complex security risks. The findings show the effectiveness of these strategies in enhancing the security and reliability of IoT devices. This study provides significant insights into the use of machine learning to protect Internet of Things devices while also addressing crucial concerns such as power consumption and privacy.
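A minimal sketch of the kind of model comparison described above, using scikit-learn on placeholder data; the feature table and hyperparameters are assumptions.

```python
# Hedged sketch: comparing KNN, Decision Tree, and Logistic Regression on a
# placeholder IoT-anomaly feature table (synthetic data, illustrative settings).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(max_depth=8, random_state=1),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, model.predict(X_te)), 3))
```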
Authors - Franciskus Antonius Alijoyo, N Venkatramana, Omaia AlOmari, Shamim Ahmad Khan, B Kiran Bala Abstract - The Internet of Things (IoT) is becoming a crucial component of many industries, from smart cities to healthcare, in today's networked world. IoT devices are becoming more and more susceptible to security risks, especially zero-day (0day) attacks, which take advantage of undiscovered flaws. The dynamic and dispersed nature of these systems makes it difficult to identify and mitigate these attacks in IoT contexts. This research focuses on a deep learning model that was created and implemented in Python, designed specifically to perform the detection task with high accuracy. The proposed Autoencoder (AE) with Attention Mechanism model demonstrates exceptional performance in detecting zero-day attacks, achieving an accuracy of 99.45%, precision of 98.56%, recall of 98.53%, and an F1 score of 98.21%. The attention mechanism helps the model focus on the most relevant features, enhancing its efficiency and reducing computational overhead, making it a promising solution for real-time security applications in IoT systems. Compared to previous methods, such as STL+SVM and AE+DNN, the proposed model significantly outperforms these approaches. These results highlight its superior ability to identify anomalies with minimal false positives. Because of its resilience, the model is highly effective at detecting zero-day attacks. The results demonstrate how deep learning may improve IoT systems' security posture by offering proactive, real-time protection against zero-day threats, resulting in safer and more robust IoT environments.
Authors - Freedom Khubisa, Oludayo Olugbara Abstract - This paper presents the development and evaluation of an artificial intelligence (AI)-driven web application for detecting maize diseases. The AI application was designed according to the design science methodology to offer accurate and real-time detection of maize diseases through a user-friendly interface. The application used the Flask framework and the Python programming language, leveraging multiple libraries and Application Programming Interfaces (APIs) to handle aspects such as the database, real-time communication, AI models, weather forecast data, and language translation. The application's AI model is a stacked ensemble of cutting-edge deep learning architectures. Technical performance testing was performed using GTmetrix metrics, and the results were remarkable. The WebQual4.0 framework was used to evaluate the application's usability, information quality and service interaction quality. The Cronbach's alpha (α) reliability measure was applied to assess internal consistency for WebQual4.0, which yielded an acceptable reliability score of 0.6809. The usability analysis showed that users perceived the AI-driven web application as intuitive, with high scores computed for navigation and ease of use. The quality of information was rated positively, with users appreciating the reliability and relevance of the maize disease detection results of the AI application. The service interaction quality indicated potential for enhancement, a concern also highlighted in qualitative user feedback that will be considered for future improvement. The study findings generally indicated that our AI application has great potential to improve agricultural practices by providing early maize disease diagnostics and decisive information to aid maize farmers and enhance maize yields.
Authors - Abdulrahman S. Alenizi, Khamis A. Al-Karawi Abstract - The liver is a vital organ responsible for numerous physiological functions in the human body. In recent years, the prevalence of liver diseases has risen significantly worldwide, mainly due to unhealthy lifestyle choices and excessive alcohol use, and the burden is worsened by several hepatotoxic factors. Obesity is a major root cause of chronic liver disease. Obesity, undiagnosed viral hepatitis infections, alcohol consumption, increased risk of hemoptysis or hematemesis, renal or hepatic failure, jaundice, hepatic encephalopathy, and many other conditions can all contribute to chronic liver disease. Hepatitis, an infection inflaming liver tissue, has been thoroughly investigated using machine learning for illness identification. Numerous models are employed to diagnose illnesses, but limited research focuses on the connections between hepatitis symptoms. This research intends to examine chronic liver disease through machine learning predictions. It assesses the efficacy of multiple algorithms, including Logistic Regression, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbours (K-NN), and Decision Tree, by quantifying their accuracy, precision, recall, and F1 score. Experiments were performed on the dataset utilising these classifiers to evaluate their efficacy. The findings demonstrate that the Random Forest method attains the highest accuracy at 87.76%, surpassing other models in disease prediction, and it also leads in precision, recall, and F1 score. Consequently, the study concludes that the Random Forest model is the most effective for predicting liver disease in its early stages.
Authors - Phuong Thao Nguyen Abstract - In recent years, the application of machine learning (ML) in anomaly detection for auditing and financial error detection has garnered significant attention. Traditional auditing methods, often reliant on manual inspection, face challenges in accuracy and efficiency, especially when handling large datasets. This study explores the integration of ML techniques to enhance the detection of anomalies in financial data specific to Thai Nguyen Province, Vietnam. We evaluate multiple ML algorithms, including supervised models (logistic regression, support vector machines) and unsupervised models (k-means clustering, isolation forest, autoencoders), to identify unusual patterns and potential financial discrepancies. Using financial records and audit reports from Thai Nguyen, the models were trained and tested to assess their accuracy, precision, and robustness. Our findings demonstrate that ML models can effectively detect anomalies and improve error identification compared to traditional methods. This paper provides practical insights and applications for local auditors, highlighting ML’s potential to strengthen financial oversight and fraud prevention within Thai Nguyen. Future research directions are also proposed to enhance model interpretability and address unique challenges in Vietnamese financial contexts.
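As a hedged illustration of the unsupervised side of the approach, the snippet below flags unusual financial records with an Isolation Forest; the column names and contamination rate are invented for illustration.

```python
# Hedged sketch: unsupervised anomaly detection on financial records with an
# Isolation Forest (column names and contamination rate are illustrative only).
import pandas as pd
from sklearn.ensemble import IsolationForest

records = pd.DataFrame({
    "amount":        [120.0, 98.5, 15000.0, 110.2, 99.9, 87.3],
    "days_to_post":  [1, 2, 45, 1, 3, 2],
    "num_approvals": [2, 2, 0, 2, 1, 2],
})

iso = IsolationForest(n_estimators=200, contamination=0.1, random_state=0)
records["anomaly"] = iso.fit_predict(records)      # -1 flags a potential anomaly
print(records[records["anomaly"] == -1])
```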
Authors - Aman Mussa, Madina Mansurova Abstract - The rapid advancement of neural networks has revolutionized multiple domains, as evidenced by the 2024 Nobel Prizes in Physics and Chemistry, both awarded for contributions to neural networks. Large language models (LLMs), such as ChatGPT, have significantly reshaped AI interactions, gaining unprecedented growth and recognition. However, these models still face substantial challenges with low-resource languages like Kazakh, which accounts for less than 0.1% of online content. The scarcity of training data often results in unstable and inaccurate outputs. To address this issue, we present a novel Kazakh language dataset specifically designed for self-instruct fine-tuning of LLMs, comprising 50,000 diverse instructions from internet sources and textbooks. Using Low-Rank Adaptation (LoRa), a parameter-efficient fine-tuning technique, we successfully fine-tuned the LLaMA 2 model on this dataset. Experimental results demonstrate improvements in the model’s ability to comprehend and generate Kazakh text, despite the absence of established benchmarks. This research underscores the potential of large-scale models to bridge the performance gap in low-resource languages and highlights the importance of curated datasets in advancing AI-driven technologies for underrepresented linguistic communities. Future work will focus on developing robust benchmarking standards to further evaluate and enhance these models.
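For readers unfamiliar with the technique, the sketch below outlines parameter-efficient LoRA fine-tuning with the Hugging Face PEFT library; the model name, target modules, and ranks are assumptions for illustration and do not reproduce the authors' exact configuration or their Kazakh instruction dataset.

```python
# Hedged sketch of LoRA fine-tuning with Hugging Face PEFT (illustrative settings;
# the base model is gated and access must be granted separately).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "meta-llama/Llama-2-7b-hf"               # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=16, lora_alpha=32,
                  lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # only the low-rank adapters are trained

# Training would then proceed with a standard supervised fine-tuning loop over the
# instruction dataset described above.
```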
Authors - Eduardo Puraivan, Pablo Ormeno-Arriagada, Steffanie Kloss, Connie Cofre-Morales Abstract - We are in the information age, but also in the era of disinformation, with millions of fake news items circulating daily. Various fields are working to identify and understand fake news. We focus on hybrid approaches combining machine learning and natural language processing, using surface linguistic features, which are independent of language and enable a multilingual approach. Many studies rely on binary classification, overlooking multiclass problems and class imbalance, often focusing only on English. We propose a methodology that applies surface linguistic features for multiclass fake news detection in a multilingual context. Experiments were conducted on two datasets, LIAR (English) and CLNews (Spanish), both imbalanced. Using Synthetic Minority Oversampling Technique (SMOTE), Random Oversampling (ROS), and Random Undersampling (RUS), we observed improved class detection. For example, in LIAR, the classification of the ‘false’ class improved by 43.38% using SMOTE with Adaptive Boosting. In CLNews, the ROS technique with Random Forest raised accuracy to 95%, representing a 158% relative improvement over the unbalanced scenario. These results highlight our approach’s effectiveness in addressing the problem of multiclass fake news detection in an imbalanced, multilingual context.
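A hedged sketch of the general setup (language-independent surface features, Random Oversampling, and a Random Forest) is given below; the feature set and placeholder corpus are simplified stand-ins for those used in the study.

```python
# Hedged sketch: simple surface features + Random Oversampling + Random Forest
# for multiclass fake-news classification (placeholder corpus and labels).
import numpy as np
from imblearn.over_sampling import RandomOverSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def surface_features(texts):
    feats = []
    for t in texts:
        words = t.split()
        feats.append([len(t),                              # characters
                      len(words),                          # tokens
                      np.mean([len(w) for w in words]),    # mean word length
                      sum(c.isupper() for c in t) / max(len(t), 1),
                      t.count("!") + t.count("?")])
    return np.array(feats)

texts = ["Example claim about an event ...", "Otro titular de ejemplo ..."] * 50
labels = np.random.choice(["false", "half-true", "true"], size=len(texts), p=[0.2, 0.3, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(surface_features(texts), labels, random_state=0)
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```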
Authors - Timi Heino, Sampsa Rauti, Sammani Rajapaksha, Panu Puhtila Abstract - Today, web analytics services are widely used on modern websites. While their main selling point is improving the user experience and return on investment, in practice they also increase the profits of third-party service providers through access to the harvested data. In this paper, we present the current state-of-the-art research on the use of web analytics tools and the kinds of privacy threats these applications pose for website users. Our study was conducted as a literature review, in which we focused on papers that described third-party analytics in detail and discussed their relation to user privacy and the privacy challenges they pose. We focused specifically on papers dealing with practical third-party analytics tools, such as Google Analytics or CrazyEgg. We review the application areas, purposes of use, and data items collected by web analytics tools, as well as the privacy risks mentioned in the literature. Our results show that web analytics tools are used in ways which severely compromise user privacy in many areas. Practices such as collecting a wide variety of unnecessary data items, storing data for extended periods of time without a good reason, and not informing users appropriately are common. In this study, we also give some recommendations to alleviate the situation.
Authors - Abigail Gonzalez-Arriagada, Ruben Lopez-Leiva, Connie Cofre-Morales, Eduardo Puraivan Abstract - The rapid advancement of information and communication technologies (ICT) has created a significant digital divide between older adults and younger generations. This divide affects the autonomy of older adults in a digitalized world. To address this issue, various initiatives have attempted to promote their digital skills, which requires reliable tools to measure them. However, assessing these competencies in this age group presents complex challenges, such as developing scales that accurately reflect the dimensions involved. In this study, we present empirical evidence on the reliability and adaptation of the Assessment of Computer-Related Skills (ACRS) scale. We translated the instrument into Spanish and added descriptors to optimize its application. The evaluation included 54 older adults in Chile (39 women and 15 men, aged 55 to 80) in an environment designed for individualized observation during the performance of specific digital tasks. The analyses revealed that the five dimensions of the instrument have high reliability, with Cronbach’s alpha values between 0.959 and 0.968. Six items were identified whose removal could slightly improve this indicator. Overall, the scale shows excellent internal consistency, with a G6 coefficient of 0.9994. These results confirm that, both at the level of each dimension and as a whole, the instrument demonstrates strong internal consistency, reinforcing its utility for assessing the intended competencies. An additional contribution of this work is the public availability of the data obtained, with the aim of encouraging future research in this area. Given the nature of the scale, which allows for the assessment of skills across various computer-related tasks, evidence of its high internal reliability constitutes a valuable resource for designing more inclusive educational programs specifically tailored to the needs of older adults in digital environments.
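The reliability statistic reported above can be computed directly; the sketch below shows Cronbach's alpha for one dimension of a scale, using invented ratings for illustration.

```python
# Hedged sketch of the reliability computation: Cronbach's alpha for one dimension,
# alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score).
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items) with numeric ratings."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative ratings from 6 participants on a 4-item dimension (invented data).
ratings = [[4, 5, 4, 5],
           [3, 3, 4, 3],
           [5, 5, 5, 4],
           [2, 3, 2, 3],
           [4, 4, 5, 5],
           [3, 4, 3, 3]]
print(round(cronbach_alpha(ratings), 3))
```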
Authors - Svetlin Stefanov, Malinka Ivanova Abstract - The advent of new technologies has increased the complexity of the cybercrime landscape and of crime scenes, which requires an adequate response from digital forensic investigators. To support their forensic activities, a number of models and methodologies have been developed, such as the Digital Forensics Investigation from a Practical Point of View (DFIP) methodology proposed by us in a previous work. In addition, there is an urgent need for a virtual environment that would organize and manage the activities of investigators related to communication, document exchange, preparation of computer expertise, teamwork, information delivery and training. In this context, a software system implementing the DFIP methodology has been developed, and the aim of the paper is to present the results of a study regarding the opinions and attitudes of forensic experts on the usefulness and role of the software system during the different phases of a digital forensic investigation.
Authors - Timi Heino, Robin Carlsson, Panu Puhtila, Sammani Rajapaksha, Henna Lohi, Sampsa Rauti Abstract - Electronics is one of the most popular product categories among consumers online. In this paper, we conduct a study on the third-party data leaks occurring on the websites of the online electronics stores most used by Finnish residents, as well as the number of third parties present on these websites. We studied the leaks by recording and analyzing the network traffic from each website while performing the actions a normal user takes when purchasing a product. We also analyze dark patterns found in these websites' cookie consent banners. Our results show that in 80% of the cases, the product name, product ID and price were leaked to third parties along with data identifying the user. Almost all of the inspected websites used dark patterns in their cookie consent banners, and privacy policies often had severe deficiencies in informing the user of the extent of data collection.
Authors - Luis E. Quito-Calle, Maria E. Barros-Ponton, Dalila M. Gonzalez-Gonzalez, Luis F. Guerrero-Vasquez, Jessica V. Quito-Calle Abstract - The confinement of families, whether due to health emergencies or other quarantines, has brought lifestyle changes, altered the behavior of the population, and caused stress among family members facing confinement. The present study aimed to determine whether there is an association between lifestyles and parents' coping with stress during confinement due to the COVID-19 health emergency or quarantine. The study methodology was quantitative, descriptive, correlational and cross-sectional. Participants were 75 representatives of the Bilingual Educational Institute "Home and School" (INEBHYE). The instruments used were the Lifestyle Profile Questionnaire (PEPS-I, in Spanish) and the Stress Coping Questionnaire (CAE, in Spanish). The results show that a healthy lifestyle predominates, with families coping with confinement-related stress mainly through problem solving, positive reappraisal and religion. In conclusion, there is a statistically significant association between the stress-coping subscales and family lifestyle, implying a change in lifestyle to face the stress caused by confinement due to COVID-19.
Authors - Vicente A. Pitogo, Cristopher C. Abalorio, Rolyn C. Daguil, Ryan O. Cuarez, Sandra T. Solis, Rex G. Parro Abstract - The agricultural resources in the Philippines are essential for national food security and economic development, with coffee at their center. Recent data released by the Philippine Statistics Authority (PSA) show an increase in national coffee production, although there has been a worrying decline in production in the Caraga region, which has over two thousand five hundred growers and a large area of land planted to coffee. The FarmVista project addressed this challenge through a data-driven approach by applying Principal Component Analysis (PCA) and various machine learning algorithms to classify and analyze coffee yield in Caraga. The study utilized a comprehensive dataset, the Coffee Farmers Enumerated Data, encompassing socio-demographic details, farming practices, and other influential factors. Gradient Boosting achieved the highest accuracy of 98.69%, with Random Forest closely following at 95.63%. These results highlight the effectiveness of advanced analytics and machine learning in improving coffee yield classification. By uncovering key patterns and factors affecting yield quality, this study provides valuable insights to optimize the coffee value chain in Caraga and addresses the region's production challenges.
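A hedged sketch of the kind of pipeline described above, combining scaling, PCA, and Gradient Boosting on synthetic data standing in for the Coffee Farmers Enumerated Data.

```python
# Hedged sketch: scaling + PCA + Gradient Boosting for yield classification
# (synthetic placeholder data, illustrative hyperparameters).
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2500, n_features=25, n_informative=10, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),           # keep components explaining 95% of variance
    ("gb", GradientBoostingClassifier(random_state=0)),
])
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```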
Authors - Bruno Zaninotto, Carlos Eduardo Barbosa, Alice Fonseca Monteiro, Lucas Nobrega, Luiz Felipe Martinez, Matheus Argolo, Geraldo Xexeo, Jano Moreira de Souza Abstract - The dynamic between buyers and sellers in the retail sector often leads to conflicts, necessitating a deeper understanding of customer complaints. The Internet is where customers can voice their opinions to influence purchasing decisions and shape company reputations. Brazil, recognized among the top 10 countries with the highest expectations for e-commerce growth worldwide in 2022, demonstrates a rapidly expanding market ready for exploration. This study addresses the problem by applying Latent Semantic Analysis (LSA) to analyze complaints about Americanas S.A., a large retail company on the Reclame Aqui platform, using the company as a case study for broader methodological application. Our findings reveal significant uniformity in complaints across Brazil, primarily concerning order processing, delivery, and product quality. These insights offer actionable intelligence for retailers to refine their Customer Relationship Management strategies and for the government to strengthen consumer protection policies, demonstrating the utility of LSA in improving customer satisfaction and trust in the retail landscape.
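As an illustration of the LSA step, the sketch below builds TF-IDF vectors over a handful of invented complaint snippets, reduces them with truncated SVD, and inspects the top terms per latent topic.

```python
# Hedged sketch of Latent Semantic Analysis: TF-IDF + truncated SVD over a tiny,
# invented complaint corpus (not the Reclame Aqui data used in the study).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

complaints = [
    "pedido atrasado e sem previsao de entrega",
    "produto chegou com defeito e quero reembolso",
    "entrega nao realizada e atendimento nao responde",
    "cobranca duplicada no cartao apos cancelamento",
]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(complaints)
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)

terms = tfidf.get_feature_names_out()
for i, comp in enumerate(svd.components_):
    top = comp.argsort()[::-1][:5]             # strongest terms in each latent topic
    print(f"topic {i}:", [terms[j] for j in top])
```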
Authors - Islam Oshallah, Mohamed Basem, Ali Hamdi, Ammar Mohammed Abstract - Question answering systems face critical limitations in languages with limited resources and scarce data, making the development of robust models especially challenging. The Quranic QA system holds significant importance as it facilitates a deeper understanding of the Quran, a Holy text for over a billion people worldwide. However, these systems face unique challenges, including the linguistic disparity between questions written in Modern Standard Arabic and answers found in Quranic verses written in Classical Arabic, and the small size of existing datasets, which further restricts model performance. To address these challenges, we adopt a cross-language approach by (1) Dataset Augmentation: expanding and enriching the dataset through machine translation to convert Arabic questions into English, paraphrasing questions to create linguistic diversity, and retrieving answers from an English translation of the Quran to align with multilingual training requirements; and (2) Language Model Fine-Tuning: utilizing pre-trained models such as BERT-Medium, RoBERTa-Base, DeBERTa-v3-Base, ELECTRA-Large, Flan-T5, Bloom, and Falcon to address the specific requirements of Quranic QA. Experimental results demonstrate that this cross-language approach significantly improves model performance, with RoBERTa-Base achieving the highest MAP@10 (0.34) and MRR (0.52), while DeBERTa-v3-Base excels in Recall@10 (0.50) and Precision@10 (0.24). These findings underscore the effectiveness of cross-language strategies in overcoming linguistic barriers and advancing Quranic QA systems.
Authors - Aziza Irmatova, Mukhabbatkhon Mirzakarimova, Dilafruz Iskandarova, Guli-ra'no Abdumalikova Abstract - Today, the development of digital education plays an important role in radically changing the education system and making learning processes more innovative, interactive and convenient. In particular, digital platforms are the main tools that can change the educational process. Through these platforms, students have the opportunity to study lessons anywhere and at any time, without being limited to traditional classrooms. From this point of view, the development and implementation of digital educational platforms in educational institutions is one of the urgent issues, and the success of this process largely depends on the Internet coverage in the country, investments in digital infrastructure, and the impact of government policy. This article empirically analyzes the impact of Internet coverage, investments in digital infrastructure, and government policy on the implementation of digital educational platforms in Uzbekistan. Government policy was measured through the public's assessment of it.
Authors - Baraa Hikal, Ahmed Nasreldin, Ali Hamdi, Ammar Mohammed Abstract - Hallucination detection in text generation remains an ongoing struggle for natural language processing (NLP) systems, frequently resulting in unreliable outputs in applications such as machine translation and definition modeling. Existing methods struggle with data scarcity and the limitations of unlabeled datasets, as highlighted by the SHROOM shared task at SemEval-2024. In this work, we propose a novel framework to address these challenges, introducing DeepSeek Few-shot Optimization to enhance weak label generation through iterative prompt engineering. We achieved high-quality annotations that considerably enhanced the performance of downstream models by restructuring data to align with instruct generative models. We further fine-tuned the Mistral-7B-Instruct-v0.3 model on these optimized annotations, enabling it to accurately detect hallucinations in resource-limited settings. Combining this fine-tuned model with ensemble learning strategies, our approach achieved 85.5% accuracy on the test set, setting a new benchmark for the SHROOM task. This study demonstrates the effectiveness of data restructuring, few-shot optimization, and fine-tuning in building scalable and robust hallucination detection frameworks for resource-constrained NLP systems.
Authors - Youssef Maklad, Fares Wael, Wael Elsersy, Ali Hamdi Abstract - This paper presents a novel approach to evaluating the efficiency of a RAG-based agentic Large Language Model (LLM) architecture in network packet seed generation for network protocol fuzzing. Enhanced by chain-of-thought (COT) prompting techniques, the proposed approach focuses on improving the seeds' structural quality in order to guide protocol fuzzing frameworks through a wide exploration of the protocol state space. Our method leverages RAG and text embeddings in two stages. In the first stage, the agent dynamically refers to the Request For Comments (RFC) documents knowledge base to answer queries regarding the protocol Finite State Machine (FSM), then iteratively reasons through the retrieved knowledge for output refinement and proper seed placement. In the second stage, we evaluate the structural quality of the agent's output based on metrics such as BLEU, ROUGE, and Word Error Rate (WER) by comparing the generated packets against the ground-truth packets. Our experiments demonstrate significant improvements of up to 18.19%, 14.81%, and 23.45% in BLEU, ROUGE, and WER, respectively, over baseline models. These results confirm the potential of such an approach to improve LLM-based protocol fuzzing frameworks for the identification of hidden vulnerabilities.
Authors - Tushar Vasudev, Surbhi Ranga, Sahil Sankhyan, Praveen Kumar, K V Uday, Varun Dutt Abstract - To guarantee the safety and effectiveness of medical supplies like blood and vaccines, careful environmental monitoring is necessary throughout transit. While real-time monitoring has advanced, current systems often lack the strong predictive ability needed to foresee unfavorable conditions. The XGBoost Ensemble for Medical Supplies Transport (XEMST), a novel stacking ensemble model created to predict interior humidity levels during travel, is presented in this paper to fill this gap. Leveraging XGBoost's strong predictive fusion capabilities, the model incorporates predictions from fundamental machine learning methods, including Support Vector Machine, Random Forest, Decision Tree, and Linear Regression. XEMST outperformed the individual models with a Root Mean Squared Error (RMSE) of 2.22% and an R2 score of 0.96 when tested across 17 different transit situations. By enabling prompt responses, these predictive insights protect medical supply quality from environmental hazards. This study demonstrates how sophisticated ensemble learning frameworks can transform intelligent healthcare logistics.
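A hedged sketch of a stacking ensemble in the spirit of XEMST is shown below, with SVM, Random Forest, Decision Tree, and Linear Regression base learners fused by an XGBoost meta-model; the synthetic data and hyperparameters are illustrative only.

```python
# Hedged sketch: stacking regressor with SVM/RF/DT/LR base learners and an
# XGBoost meta-learner (synthetic data standing in for transit humidity records).
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("svm", SVR()),
                ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("dt", DecisionTreeRegressor(max_depth=6, random_state=0)),
                ("lr", LinearRegression())],
    final_estimator=XGBRegressor(n_estimators=200, learning_rate=0.05),
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "R2:", r2_score(y_te, pred))
```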
Authors - A B Sagar, K Ramesh Babu, Syed Usman, Deepak Chenthati, E Kiran Kumar, Boppana Balaiah, PSD Praveen, G Allen Pramod Abstract - Agricultural disasters, especially those caused by biological threats, pose severe risks to global food security and economic stability. Early detection and effective management are essential for mitigating these risks. In this research paper we propose a comprehensive disaster prediction and management framework that integrates resources such as social networks or the Internet of Things (IoT) for data collection. The model combines real-time data collection, risk assessment, and decision-making processes to forecast agricultural disasters and suggest mitigation strategies. The mathematical foundation of the model defines the relationships between key variables, such as plant species, infestation agent species, tolerance levels, and infestation rates. The system relies on IoT or mobile-based social network agents for data collection at the ground level, to obtain precise and consistent information from diverse geographic regions. The model further includes a hierarchical risk assessment process that identifies, evaluates, and assesses risks based on predefined criteria, enabling informed decision-making for disaster mitigation. Interactions among multiple plant species and multiple infestation agents are also considered to capture the complexities of agricultural systems. The proposed framework provides a scalable approach to predicting and managing agricultural disasters, particularly those arising from biological threats. By incorporating real-time data and dynamic decision-making mechanisms, the model considerably improves the resilience of agricultural systems against both localized and large-scale threats.
Authors - Herrera Nelson, Paul Francisco Baldeon Egas, Gomez-Torres Estevan, Sancho Jaime Abstract - Quito, the capital of Ecuador, is the economic core of the country, where commercial, administrative, and tourist activities are concentrated. With population growth, the city has undergone major transformations, resulting in traffic congestion problems that affect health, cause delays in daily activities, and increase pollution levels, among other inconveniences. Over time, important mobility initiatives have been implemented, such as traffic control systems, monitoring, construction of peripheral roads, and the "peak and license plate" measure that restricts the use of vehicles during peak hours according to their license plate, a strategy also adopted in several Latin American countries. However, these actions have not been enough, and congestion continues to increase, causing discomfort to citizens. Given this situation, the implementation of a low-cost computer application is proposed that identifies traffic situations in real time and supports decision-making to mitigate this problem, using processed data from the social network Twitter and traffic records from the city of Quito.
Authors - Elissa Mollakuqe, Hasan Dag, Vesa Mollakuqe, Vesna Dimitrova Abstract - Groupoids are algebraic structures, which generalize groups by allowing partial symmetries, and are useful in various fields, including topology, category theory, and algebraic geometry. Understanding the variance explained by Principal Component Analysis (PCA) components and the correlations among variables within groupoids can provide valuable insights into their structures and relationships. This study aims to explore the use of PCA as a dimensionality reduction technique to understand the variance explained by different components in the context of groupoids. Additionally, we examine the interrelationships among variables through a color-coded correlation matrix, facilitating insights into the structure and dependencies within groupoid datasets. The findings contribute to the broader understanding of data representation and analysis in mathematical and computational frameworks.
Authors - Laurent BARTHELEMY Abstract - In 2024 [7], the author proposed a calculation of weather criteria for vessel boarding against the ladder of an offshore wind turbine, based on a regular wave. However, international guidelines [2] prescribe that "95% waves pass with no slip above 300mm (or one ladder rung)". In order to meet such acceptability criteria, it becomes necessary to investigate boarding under a real sea state, which is an irregular wave. The findings agree with the results from other publications [6][7]. The outcome is then to propose boarding optimisation strategies, compared to present professional practices. The purpose is to achieve lower gas emissions by minimising fuel consumption.
Authors - Amro Saleh, Nailah Al-Madi Abstract - Machine learning (ML) enables valuable insights from data, but traditional ML approaches often require centralizing data, raising privacy and security concerns, especially in sensitive sectors like healthcare. Federated Learning (FL) offers a solution by allowing multiple clients to train models locally without sharing raw data, thus preserving privacy while enabling robust model training. This paper investigates using FL for classifying breast ultrasound images, a crucial task in breast cancer diagnosis. We apply a Convolutional Neural Network (CNN) classifier within an FL framework, evaluated through methods like FedAvg on platforms such as Flower and TensorFlow. The results show that FL achieves competitive accuracy compared to centralized models while ensuring data privacy, making it a promising approach for healthcare applications.
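The core FedAvg aggregation step can be reduced to a few lines: clients train locally and the server averages their weights in proportion to local dataset size. Frameworks such as Flower wrap this loop; the NumPy sketch below is only an illustration of the averaging rule, not the study's implementation.

```python
# Hedged sketch of the FedAvg aggregation rule: a size-weighted average of the
# clients' model weights (plain NumPy, for illustration only).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """client_weights: one list of layer arrays per client, all with matching shapes."""
    total = sum(client_sizes)
    avg = []
    for layer_idx in range(len(client_weights[0])):
        layer = sum(w[layer_idx] * (n / total)
                    for w, n in zip(client_weights, client_sizes))
        avg.append(layer)
    return avg

# Two clients with a toy two-layer model.
client_a = [np.ones((3, 3)), np.zeros(3)]
client_b = [np.zeros((3, 3)), np.ones(3)]
global_weights = fed_avg([client_a, client_b], client_sizes=[600, 200])
print(global_weights[0][0, 0], global_weights[1][0])   # 0.75 and 0.25
```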
Authors - Ahmed D. Alharthi, Mohammed M. Tounsi Abstract - The Hajj pilgrimage represents one of the largest mass gatherings globally, posing substantial challenges in terms of health and safety management. Millions of pilgrims converge each year in Saudi Arabia to fulfil their religious obligations, underscoring the critical need to address the various health risks that may emerge during such a large-scale event. Health volunteering plays a pivotal role in delivering timely and high-quality medical services to pilgrims. This study introduces the Integrated Health Volunteering (IHV) framework, designed to enhance health and safety outcomes through an optimised, rapid response system. The IHV framework facilitates the coordinated deployment of healthcare professionals—including doctors, anaesthetists, pharmacists, and others—in critical medical emergencies such as cardiac arrest and severe haemorrhage. Central to this framework is the integration of advanced technologies, including Artificial Intelligence algorithms, to support health volunteers’ decision-making. The framework has been validated and subjected to accuracy assessments to ensure its efficacy in real-world situations, particularly in the context of mass gatherings like the Hajj.
Authors - Mariam Esmat, Mohamed Elgemeie, Mohamed Sokar, Heba Ali, Sahar Selim Abstract - This paper explores the relationship between deep learning approaches and the intricate nature of EEG signals, focusing on the development of a P300 brain speller. The study uses an underutilized dataset to explore the classification of EEG signals and distinguishing features of "target" and "non-target" signals. The data processing adhered to current literature standards, and various deep learning methods, including Recurrent Neural Networks, Artificial Neural Networks, Transformers, and Linear Discriminant Analysis, were employed to classify processed EEG signals into target and non-target categories. The classification performance was evaluated using the area under the curve (AUC) score and accuracy. This research lays a foundation for future advancements in understanding and utilizing the human brain in neuroscience and technology.
Authors - Angel Peredo, Hector Lugo, Christian Narcia-Macias, Jose Espinoza, Daniel Masamba, Adan Gandarilla, Erik Enriquez, Dong-Chul Kim Abstract - This paper explores the under-examined potential of offline reinforcement learning algorithms in the context of Smart Grids. While online methods, such as Proximal Policy Optimization (PPO), have been extensively studied, offline methods, which inherently avoid real-time interactions, may offer practical safety benefits in scenarios like power grid management, where suboptimal policies could lead to severe consequences. To investigate this, we conducted experiments in Grid2Op environments with varying grid complexity, including differences in size and topology. Our results suggest that offline algorithms can achieve comparable or superior performance to online methods, particularly as grid complexity increases. Additionally, we observed that the diversity of training data plays a crucial role, with data collected through environment sampling yielding better results than data generated by trained models. These findings underscore the value of further exploring offline approaches in safety-critical applications.
Authors - Mohammed Sabiri, Bassou Aouijil Abstract - Let R_m = F_{p^r}[v] / ⟨v^m − v⟩, where p is an odd prime, F_{p^r} is the finite field with p^r elements, and v^m = v. In this study, we investigate quantum codes over F_{p^r} by using dual-containing constacyclic codes over R_m. Furthermore, by using cyclic codes over the ring R_m and their decomposition into cyclic codes over the finite field F_{p^r}, LCD codes are obtained as images of LCD codes over R_m.
Authors - Hector Lugo, Angel Peredo, Christian Narcia-Macias, Jose Espinoza, Daniel Masamba, Adan Gandarilla, Erik Enriquez, DongChul Kim Abstract - Cancer continues to be a major global health challenge, with high rates of morbidity and mortality. Traditional chemotherapy regimens often overlook individual patient variability, leading to suboptimal outcomes and significant side effects. This paper presents the application of Reinforcement Learning (RL) and Decision Transformers (DT) for developing personalized chemotherapy strategies. By leveraging offline data and simulated environments, our approach dynamically adjusts dosing strategies based on patient responses, optimizing therapeutic efficacy while minimizing toxicity. Experimental results show that DTs outperform both traditional Constant Dose Regimens (CDR) and online training methods like Proximal Policy Optimization (PPO), leading to improved survival times and reduced mortality. Our findings highlight the potential of RL and DTs to revolutionize cancer treatment by offering more effective and personalized therapeutic options.
Authors - Sharmila Rathod, Aryan Panchal, Krish Ramle, Ashlesha Padvi, Jash Panchal Abstract - Diabetes, or hyperglycemia, a condition characterized by significantly elevated blood sugar levels, can pose a significant threat to an individual's effective lifespan as well as a significant risk for various cardiovascular diseases. Reliable and non-invasive monitoring of hyperglycemia, and also hypoglycemia, is important for timely intervention and prognosis. The paper presents an extensive and structured survey of non-invasive glucose monitoring and diabetes detection using machine learning and signal analysis techniques. The paper takes a comparative-analysis approach, presenting the literature in tabular and diagrammatic form. Ten papers that use Photoplethysmography (PPG) and Electrocardiography (ECG) signals to detect glucose variations with machine learning techniques are examined. The review highlights each proposed system, its unique findings, improvements, techniques, methods, future prospects, comparison with previous studies, feature importance, and model evaluation, as well as the stated accuracy. This comprehensive analysis aims to provide insights into the methodologies for non-invasive monitoring of glycemic conditions, thereby contributing to the development of improved disease analysis.
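As an illustration of the kind of PPG-based pipeline the surveyed papers describe, the sketch below extracts simple beat-timing features with SciPy and feeds them to a classifier (the synthetic signal, feature set, and toy labels are illustrative assumptions, not drawn from any surveyed study):

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import RandomForestClassifier

FS = 125  # sampling rate in Hz (assumed)

def ppg_features(signal: np.ndarray) -> list[float]:
    """Crude beat-interval features from a PPG segment."""
    peaks, _ = find_peaks(signal, distance=FS // 2)
    ibi = np.diff(peaks) / FS                   # inter-beat intervals (s)
    if len(ibi) < 2:
        return [0.0, 0.0, 0.0]
    return [float(np.mean(ibi)), float(np.std(ibi)), float(signal.std())]

# Synthetic "recordings": pseudo-periodic signals with noise.
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / FS)
X, y = [], []
for label in (0, 1):      # 0 = normoglycemic, 1 = hyperglycemic (toy labels)
    for _ in range(50):
        hr = rng.uniform(1.0, 1.3) + 0.2 * label
        sig = np.sin(2 * np.pi * hr * t) + 0.1 * rng.normal(size=t.size)
        X.append(ppg_features(sig))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("Training accuracy (toy data):", clf.score(X, y))
```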
Authors - Anastasia Vitvitskaya, Almaz Galimov Abstract - We are living in the age of digitalization, a time when the latest technologies are changing everything around us. Artificial intelligence and digitalization have affected all aspects of our lives and society. It is important to recognise that the Covid-19 pandemic accelerated the development of digital technologies. Technologies of augmented and virtual reality (AR/VR) are used in many fields, including education. Online platforms allowed people to work and study remotely from the comfort of their homes, which made the online format more popular. Informal online education and the use of generative artificial intelligence are now actively developing, but it is crucial to understand the implications that the active use of artificial intelligence in education will have. The purpose of the study is to identify the tasks for which generative artificial intelligence is used. As research methods, we used the collection and analysis of scientific literature as well as a survey in which 750 people reported the purposes for which they use artificial intelligence. The article considers theoretical and practical aspects of the application of generative artificial intelligence, and defines and classifies these tasks.
Authors - Vishnu Kumar Abstract - Cold chain logistics is the process of maintaining a controlled temperature throughout the storage and transportation of temperature-sensitive products. Ensuring the integrity of the cold chain is critical for the safety and efficacy of pharmaceutical (pharma) products. In the modern supply chain landscape, the pharma industry involves many stakeholders, including Small and Medium-sized Enterprises (SMEs), which handle logistics, storage, and retail operations. Despite the availability of advanced temperature monitoring technologies, SMEs face significant challenges in adopting these solutions due to economic constraints, limited technological resources, and lack of expertise. To bridge this gap, this work proposes a novel, cost-effective Internet of Things (IoT) based framework for real-time temperature monitoring in the cold chain of pharma products. Using a Raspberry Pi and Sense HAT module, coupled with a smartphone application, the system enables SMEs to implement an affordable and reliable cold chain monitoring solution. The capabilities of the proposed framework are demonstrated through a temperature monitoring case study simulating the conditions faced in pharma supply chains. This work is expected to provide a practical resource for SMEs and suppliers seeking to improve their cold chain management without incurring excessive costs.
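A minimal sketch of the sensing loop such a Raspberry Pi / Sense HAT setup could run, using the standard sense_hat library (the 2–8 °C band, sampling interval, and log destination are illustrative assumptions; the paper's smartphone integration is not shown):

```python
import time
from datetime import datetime, timezone

from sense_hat import SenseHat  # available on Raspberry Pi OS

# Hypothetical storage band for a temperature-sensitive pharma product.
LOW_C, HIGH_C = 2.0, 8.0
INTERVAL_S = 60

sense = SenseHat()

def read_temperature_c() -> float:
    """Read the ambient temperature from the Sense HAT in degrees Celsius."""
    return float(sense.get_temperature())

def main() -> None:
    while True:
        temp = read_temperature_c()
        stamp = datetime.now(timezone.utc).isoformat()
        in_range = LOW_C <= temp <= HIGH_C
        # In the full framework this record would be pushed to the smartphone
        # application; here it is simply appended to a local log file.
        with open("coldchain_log.csv", "a") as log:
            log.write(f"{stamp},{temp:.2f},{'OK' if in_range else 'ALERT'}\n")
        if not in_range:
            sense.show_message("TEMP ALERT", text_colour=[255, 0, 0])
        time.sleep(INTERVAL_S)

if __name__ == "__main__":
    main()
```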
Authors - Simona Filipova-Petrakieva, Petar Matov, Milena Lazarova, Ina Taralova, Jean Jacques Loiseau Abstract - Plant disease detection plays a key role in modern agriculture, with significant implications for yield management and crop quality. This paper is a continuation of previous research by the authors' team related to the detection of pathologies on apple tree leaves. In order to mitigate the overfitting problem of traditional convolutional neural networks (CNNs), transfer learning layers are added to a ResNet50 residual neural network architecture. The suggested model is based on a pre-trained CNN whose weight coefficients are adapted until ResNet obtains the final classification. The model implementation uses the TensorFlow and Keras frameworks and is developed in a Jupyter Notebook environment. In addition, ImageDataGenerator is utilized for data augmentation and preprocessing to increase the classification accuracy of the proposed model. The model is trained using a dataset of 1821 high-resolution apple leaf images divided into four distinct classes: healthy, multiple diseases, rust, and scab. The experimental results demonstrate the effectiveness of the suggested ResNet architecture, which outperforms other state-of-the-art deep learning architectures in eliminating the overfitting problem. Identifying different apple leaf pathologies with the proposed model contributes to developing smart agricultural practices.
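A minimal sketch of the kind of ResNet50 transfer-learning setup described, with ImageDataGenerator-based augmentation (the directory layout, image size, augmentation values, and classifier head are assumptions, not the authors' exact configuration):

```python
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.preprocessing.image import ImageDataGenerator

NUM_CLASSES = 4   # healthy, multiple diseases, rust, scab
IMG_SIZE = (224, 224)

# Augmentation and preprocessing; the split and parameter values are illustrative.
datagen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=20,
                             horizontal_flip=True, validation_split=0.2)
train_gen = datagen.flow_from_directory("apple_leaves/", target_size=IMG_SIZE,
                                        batch_size=32, subset="training")
val_gen = datagen.flow_from_directory("apple_leaves/", target_size=IMG_SIZE,
                                      batch_size=32, subset="validation")

# Pre-trained ResNet50 backbone with a new classification head.
base = ResNet50(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False                      # freeze, then fine-tune later if needed
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)
```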
Authors - Malinka Ivanova, Svetlin Stefanov Abstract - The growing number and increasing complexity of cyberattacks require investigative experts to use contemporary technologies for finding and analyzing digital evidence and for preparing computer expertise. Artificial intelligence (AI) and machine learning (ML) are among the possibilities for automating a number of routine activities in digital forensics, which can then be performed significantly faster and more efficiently. The aim of the paper is to present the potential of AI and ML for analyzing digital evidence; in this case, the extraction of text and image information from PDF files is specifically examined. A classification of different types of files that could potentially be located on the victim’s or attacker’s smartphone or computer is also performed using the Decision Tree ML algorithm. Synthetically generated files and original scientific papers are utilized for the experiments. The findings show that high accuracy is obtained in classifying file formats and in analyzing and summarizing the content of PDF files, which is achieved through applying Natural Language Processing techniques and Large Language Models.
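A minimal sketch of the file-format classification step with a Decision Tree, using simple header-byte and entropy features (the feature set and synthetic records are illustrative assumptions; the PDF text-extraction step could be handled separately with a library such as pypdf):

```python
import math
import os
from collections import Counter

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, a common file-type feature."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def features(data: bytes) -> list[float]:
    # First four header bytes plus overall entropy (illustrative features).
    header = list(data[:4]) + [0] * (4 - len(data[:4]))
    return [float(b) for b in header] + [byte_entropy(data)]

# Synthetic stand-ins for files recovered from a device (0 = PDF, 1 = PNG).
samples, labels = [], []
for _ in range(200):
    samples.append(features(b"%PDF" + os.urandom(256))); labels.append(0)
    samples.append(features(b"\x89PNG" + os.urandom(256))); labels.append(1)

X_tr, X_te, y_tr, y_te = train_test_split(samples, labels, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("File-format classification accuracy:", clf.score(X_te, y_te))
```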
Authors - Enaam Youssef, Mahra Al Malek, Nagwa Babiker Yousif, Soumaya Abdellatif Abstract - Social media algorithms play an important role in suggesting content aligned with users' needs, ensuring its suitability and relevance to users. Consequently, they are considered an important aspect of everyday life in enhancing community and cultural identity among youth. This research examines the effect of social media algorithms on the community and cultural identity of the young generation in the United Arab Emirates. Theoretically supported by Social Identity Theory, this research gathered data from 341 respondents using structured questionnaires. Results indicated that Social Media Algorithms positively affect Community Identity, implying that these platforms promote a sense of belonging by connecting young users to local groups, discussions, and events, strengthening their cultural and social community ties. Results also revealed that the effects of social media algorithms on cultural identity remain positively significant. These findings indicate that social media content improves connection to cultural heritage and shapes cultural identity perceptions, although algorithms sometimes prioritize global over local practices. Overall, these results indicate a robust influence of social media in the UAE as a factor enabling the young generation to seek community identity and cultural belonging, which further helps them retain their overall social identity. Study findings and limitations are discussed accordingly.
Authors - Hayat Bihri, Soukaina Sraidi, Haggouni Jamal, Salma Azzouzi, My El Hassan Charaf Abstract - Predictive analytics and artificial intelligence (AI) offer significant potential to improve healthcare, yet challenges in achieving interoperability across diverse settings, such as long-term care and public health, remain. Enhancing Electronic Health Records (EHRs) with multimodal data provides a more comprehensive view of patient health, leading to better decision-making and patient outcomes. This study proposes a novel framework for real-time cardiovascular disease (CVD) risk prediction and monitoring by integrating medical imaging, clinical variables, and patient narratives from social media. Unlike traditional models that rely solely on structured clinical data, this approach incorporates unstructured insights, improving prediction accuracy and enabling continuous monitoring. The methodology includes modality-specific preprocessing: sentiment analysis and Named Entity Recognition (NER) for patient narratives, Convolutional Neural Networks (CNNs) for imaging, and Min-Max scaling with k-Nearest Neighbors (k-NN) imputation for clinical variables. A unique patient identifier ensures precise data fusion through multimodal transformers, with attention mechanisms prioritizing key features. Real-time monitoring leverages streaming natural language processing (NLP) to detect health trends from social media, triggering alerts for healthcare providers. The model undergoes rigorous validation using metrics such as AUC-ROC, AUC-PR, the Brier score, SHAP values, expert reviews, and clinical indicators, ensuring robustness and relevance.
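A minimal sketch of the clinical-variable preprocessing step named here (k-NN imputation of missing values followed by Min-Max scaling) using scikit-learn; the toy table and implied column names are illustrative assumptions:

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.preprocessing import MinMaxScaler

# Toy clinical table: columns might be age, systolic BP, cholesterol, BMI.
X = np.array([
    [54.0, 130.0, 210.0, 27.1],
    [63.0, np.nan, 245.0, 31.4],
    [47.0, 118.0, np.nan, 24.8],
    [np.nan, 142.0, 260.0, 29.9],
])

# 1) Fill missing values from the k nearest patients (k = 2 here).
imputer = KNNImputer(n_neighbors=2)
X_imputed = imputer.fit_transform(X)

# 2) Scale every variable to [0, 1] before fusion with the other modalities.
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X_imputed)

print(np.round(X_scaled, 3))
```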
Authors - Quoc Hung NGUYEN, Xuan Dao NGUYEN THI, Thanh Trung LE, Lam NGUYEN THI Abstract - With the rapid development of financial technology, financial product recommendation systems play an increasingly important role in enhancing user experience and reducing information search costs, becoming a key factor in the financial services industry. Amid growing competitive pressure, the diversification of user needs, and the continuous expansion of financial products, traditional recommendation systems reveal limitations, especially in terms of accuracy and personalization. Therefore, this study focuses on applying deep learning technology to develop a smarter and more efficient financial product recommendation system. We evaluate this model based on key metrics such as precision, recall, and F1-score to ensure a comprehensive assessment of the proposed approach's effectiveness. Methodologically, we employ the Long Short-Term Memory (LSTM) model, a type of Recurrent Neural Network (RNN) designed to address the challenge of long-term memory retention in time-series data. For the task of recommending the next loan product for customers, LSTM demonstrates its ability to remember crucial information from the distant past, thanks to its gate structure, including input, forget, and output gates. Additionally, the model leverages a robust self-attention mechanism to analyze complex relationships between user behavior and financial product information.
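A minimal sketch of an LSTM-based next-product model of the kind described, framed as next-item prediction over sequences of product IDs (the vocabulary size, sequence length, and embedding width are illustrative assumptions; the paper's self-attention component is omitted for brevity):

```python
import numpy as np
import tensorflow as tf

NUM_PRODUCTS = 50      # size of the loan-product catalogue (assumed)
SEQ_LEN = 8            # length of each customer's product history (assumed)

# Synthetic histories: each row is a sequence of product IDs; the target is the next one.
rng = np.random.default_rng(0)
sequences = rng.integers(0, NUM_PRODUCTS, size=(1000, SEQ_LEN + 1))
X, y = sequences[:, :-1], sequences[:, -1]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(NUM_PRODUCTS, 32),
    tf.keras.layers.LSTM(64),                       # gates handle long-range memory
    tf.keras.layers.Dense(NUM_PRODUCTS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["sparse_categorical_accuracy"])
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

# Recommend the top-3 next products for one customer history.
probs = model.predict(X[:1], verbose=0)[0]
print("Top-3 recommended product IDs:", np.argsort(probs)[-3:][::-1])
```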
Authors - Erick Verdugo, Andy Abad, Remigio Hurtado Abstract - Software defect prediction is crucial for reducing costs and improving quality. According to a Cutter Consortium report, software defects cause an estimated annual loss of $1.56 trillion in global productivity. Additionally, Tricentis reported that over 30% of software development projects failed due to undetected defects. Undetected defects can increase maintenance costs, delay deliveries, and compromise security, particularly in critical applications such as financial or medical systems. A significant challenge is dealing with imbalanced data, where there are more defect-free modules than defective ones, making detection difficult. This study proposes a four-phase approach: loading and transforming data, using balancing techniques, applying machine learning models, and explaining predictions. Techniques such as SMOTE, ADASYN, and RandomUnderSampling were used to balance the data, applied to models like Random Forest, Gradient Boosting, and SVM. The JM1 dataset, containing software quality metrics and 80% defect-free modules, was used for analysis. Data preprocessing involved imputation, encoding, and normalization. Results show that Random Forest and Gradient Boosting, combined with balancing techniques, achieved the best performance in defect identification. In the future, advanced algorithms such as XGBoost and LightGBM will be explored, and parameter optimization will be conducted to further enhance results. This approach aims to improve defect detection in software and to be applied in other fields.
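A minimal sketch of the balancing-plus-classification stage described, combining SMOTE from imbalanced-learn with a Random Forest (the synthetic 80/20 class split mirrors the JM1 imbalance mentioned above; the features and sizes are illustrative, not the JM1 data):

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for JM1-style metrics: 80% defect-free, 20% defective.
X, y = make_classification(n_samples=2000, n_features=21, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so synthetic points never leak into the test set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["no defect", "defect"]))
```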
Authors - Salma Mosaad Mohamed Elfeky, Mennaallah Nafady Ahmed Yehia, Ali Hamdi Abstract - This paper introduces a novel drone-based plant disease detection system optimized for efficient and scalable deployment using MLOps. Utilizing the CADI AI dataset for cashew crop disease classification, it includes automated workflows for iterative training, testing, and deployment across YOLO architectures (YOLOv5, YOLOv8, YOLOv9, and YOLOv10). Advanced data augmentation and incremental dataset expansion, growing from 757 training images to the full dataset, ensure fair evaluations and model optimization. YOLOv5 achieved a peak mAP@50 of 59.4%, followed by YOLOv8 with 50.1%. Iterative fine-tuning revealed YOLOv9’s superior insect detection performance (mAP@50: 70.9%) and YOLOv10’s excellence in abiotic stress detection (mAP@50: 77.3%). This study highlights MLOps’ role in real-time model deployment and benchmarking, showcasing robust object detection capabilities and emphasizing iterative optimization and auto-deployment strategies to address dataset imbalance and enhance precision agriculture.
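A minimal sketch of how one YOLO variant in such a pipeline could be trained and evaluated with the ultralytics package (the dataset YAML path, weights file, image path, and hyperparameters are illustrative assumptions; the surrounding MLOps automation is not shown):

```python
from ultralytics import YOLO

# Start from pretrained YOLOv8 weights; other variants (v5, v9, v10) follow the same pattern.
model = YOLO("yolov8n.pt")

# "cadi_ai.yaml" is a hypothetical dataset config pointing at the drone images and labels.
model.train(data="cadi_ai.yaml", epochs=50, imgsz=640, batch=16)

# Validate and report mAP@50, the metric quoted in the abstract.
metrics = model.val()
print("mAP@50:", metrics.box.map50)

# Run inference on a new drone image.
results = model.predict("field_image.jpg", conf=0.25)
```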
Authors - Ananya Deshpande, Akshay Angadi, Amulya H S, Adhokshaja R B, Nirmala Devi M Abstract - The increasing complexity of ICs and the reliance on external suppliers increase the risk of hardware Trojans, posing significant security threats. Traditional detection methods often fail due to limitations in addressing all potential vulnerabilities. This paper proposes a node compaction technique combined with an XGBoost classifier using features such as Vulnerability Factor, Transition Probability, and SCOAP metrics to classify circuit nodes as Trojan-infected or Trojan-free. The compaction reduces execution time and improves real-time monitoring. Checker logic further validates the detection of Trojans by comparing the expected and observed functionality. Validation on Trust-Hub benchmark circuits demonstrates significant improvements in detection accuracy.
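A minimal sketch of the node-classification step with XGBoost, using stand-ins for the features named above (transition probability, SCOAP controllability/observability, vulnerability factor); the synthetic data and the toy labelling rule are assumptions, not the paper's benchmark data:

```python
import numpy as np
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_nodes = 5000

# Illustrative per-node features: [transition probability, SCOAP CC0, SCOAP CC1,
# SCOAP CO, vulnerability factor]. Trojan trigger nodes tend to switch rarely.
X = np.column_stack([
    rng.uniform(0.0, 0.5, n_nodes),     # transition probability
    rng.integers(1, 50, n_nodes),       # combinational controllability-0
    rng.integers(1, 50, n_nodes),       # combinational controllability-1
    rng.integers(1, 80, n_nodes),       # combinational observability
    rng.uniform(0.0, 1.0, n_nodes),     # vulnerability factor
])
y = ((X[:, 0] < 0.05) & (X[:, 4] > 0.7)).astype(int)   # toy "Trojan-infected" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["Trojan-free", "Trojan-infected"]))
```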
Authors - Kevin Lajpop Abstract - The Fast Fourier Transform (FFT) is a fundamental algorithm used in a wide range of applications, from signal processing to cryptography. With the increasing use of embedded and mobile devices, the need to optimize FFT performance has become crucial. This study focuses on the implementation of the FFT on an ARM Cortex-A72 processor, leveraging NEON instructions, which are part of ARM's SIMD (Single Instruction, Multiple Data) instruction set. NEON instructions enable parallel operations, resulting in a significant improvement in execution times. Through a comparative analysis between implementations with and without NEON, a 99.99% reduction in execution time was demonstrated when using NEON, highlighting its effectiveness in applications that require high-speed processing, such as post-quantum cryptography.
Authors - Diego Loja, David Alvarado, Remigio Hurtado Abstract - Lung cancer, one of the leading causes of death worldwide, accounts for more than 2.2 million cases and nearly 1.8 million deaths. This type of cancer is classified into non-small cell lung carcinoma (NSCLC), the most common and slow-progressing type, and small cell lung carcinoma (SCLC), which is less common but highly aggressive [1]. In response to the urgency for rapid and accurate diagnosis, this work presents an innovative method for classifying PET images using the EfficientNetV2-S model, combined with advanced data augmentation and normalization techniques. Unlike traditional methods, this approach incorporates visual explanations based on integrated gradients, enabling the justification of model predictions. The proposed method consists of three phases: data preprocessing, experimentation, and prediction explanation. The LUNG-PET-CT-DX dataset is utilized, comprising 133 patients distributed across three main classes: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The models are evaluated using quality metrics such as accuracy (78%), precision (82%), recall (78%), and F1-score (76%), highlighting the superior performance of EfficientNetV2-S compared to other approaches. Additionally, integrated gradients are employed to visually justify predictions, providing critical interpretability in the medical context. For future work, the integration of CT images is proposed to enhance predictions, along with validation on larger datasets and optimization through fine-tuning, aiming to improve the model's generalization and robustness.
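A minimal sketch of the integrated-gradients attribution step used to justify predictions, implemented directly with tf.GradientTape (the small stand-in model, input size, and black baseline are illustrative assumptions, not the authors' exact setup):

```python
import tensorflow as tf

def integrated_gradients(model, image, target_class, steps=50):
    """Approximate integrated gradients from a black baseline to the image."""
    baseline = tf.zeros_like(image)
    # Interpolate between baseline and input in `steps` increments.
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline + alphas * (image - baseline)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        probs = model(interpolated)[:, target_class]
    grads = tape.gradient(probs, interpolated)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)  # trapezoidal rule
    return (image - baseline)[0] * avg_grads

if __name__ == "__main__":
    # Small stand-in CNN; the paper uses an EfficientNetV2-S-style backbone instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),  # 3 carcinoma classes
    ])
    image = tf.random.uniform((1, 96, 96, 3))
    attributions = integrated_gradients(model, image, target_class=0)
    print("Attribution map shape:", attributions.shape)  # (96, 96, 3)
```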
Authors - Ebin V Francis, T. Nirmala, Jiby Jose E. Abstract - This study investigates how media has affected the career aspirations of college students in Kerala and finds a tendency to favour social media careers over conventional work. Using content analysis of digital media platforms, the research investigates how media content, trends, and narratives influence students' perceptions of social media as a viable and desirable career path. This study seeks to determine what has changed over the last two decades, including whether peer effects, economic opportunities, or social acceptance are behind this shift. The study offers insights into how the career interests and preferences of the young generation in Kerala, as influenced by the media landscape, can potentially impact the labour market and employment patterns among the youth in the region, and aids in understanding the implications of media-influenced occupational aspirations and media patterns among students in Kerala.