The 10th International Congress on Information and Communication Technology, held concurrently with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom | February 18–21, 2025.
Authors - Robert Johnson, Jing Jung Zhang, Fu Kuo Manchu, Silvio Simani Abstract - With a focus on addressing the common problems of imbalance and misalignment, this study introduces an artificial intelligence tool based on a state-of-the-art deep learning method that enhances automatic condition monitoring and fault detection for mechanical processes. The main contribution is a trustworthy condition-monitoring model using artificial neural networks that extract feature vectors from signal data via frequency analysis. A high fault detection accuracy rate highlights the research accomplishment, demonstrating its ability to provide new solutions for predictive maintenance as well. This research considers the different working conditions of a mechanical process by analysing four separate operational classes: balanced operation, horizontal and vertical misalignments, unbalanced situations, and regular operation. The dataset studied in this work includes a wealth of information and was carefully calibrated for neural network training; it also has the potential to be employed in the development of maintenance procedures for mechanical plants. Finally, this study provides a significant step towards the goals of improved performance and unyielding safety requirements that industries are aiming for.
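For illustration, the sketch below shows the kind of pipeline this abstract describes: frequency-domain feature vectors extracted from vibration signals and fed to a neural-network classifier over four operational classes. It is not the authors' implementation; the sampling rate, frequency bands, signals, and labels are assumed placeholders.

```python
# Minimal sketch: frequency-domain features + neural-network classifier
# (illustrative only; sampling rate, band edges, signals and labels are assumed).
import numpy as np
from sklearn.neural_network import MLPClassifier

FS = 10_000  # assumed sampling rate in Hz
BANDS = [(0, 100), (100, 500), (500, 1000), (1000, 2000)]  # assumed frequency bands

def spectral_features(signal: np.ndarray) -> np.ndarray:
    """Return a band-energy feature vector from the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in BANDS])

# Placeholder data standing in for labelled vibration segments of the four
# classes (e.g. normal, imbalance, horizontal/vertical misalignment).
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 4096))
y = rng.integers(0, 4, size=200)

X = np.vstack([spectral_features(s) for s in X_raw])
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```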
Authors - Nejjari Nada, Chafi Anas, Kammouri Alami Salaheddine Abstract - The handicraft sector holds crucial importance within the Moroccan economy, serving as a fundamental pillar that significantly contributes to the country's economic balance. This sector not only preserves cultural heritage but also provides employment opportunities and sustains local economies. Our study primarily revolves around an in-depth exploration of the artisanal universe, aiming to derive relevant recommendations to optimize its performance and enhance its competitive edge. By focusing on identifying gaps, challenges, and opportunities within the sector, our goal is to develop concrete improvement suggestions that can catalyze continuous development and growth. Through a comprehensive analysis, we seek to provide actionable insights that can improve efficiency, sustainability, and the overall impact of the handicraft sector on the Moroccan economy. This research aspires to support policymakers, stakeholders, and artisans themselves in fostering a thriving and resilient artisanal industry that contributes robustly to economic and social development.
Authors - Ubayd Bapoo, Clement N Nyirenda Abstract - This study evaluates the performance of Soft Actor Critic (SAC), Greedy Actor Critic (GAC), and Truncated Quantile Critics (TQC) in high-dimensional decision-making tasks using fully observable environments. The focus is on parametrized action (PA) spaces, eliminating the need for recurrent networks, with the Platform-v0 and Goal-v0 benchmarks testing discrete actions linked to continuous action-parameter spaces. Hyperparameter optimization was performed with Microsoft NNI, ensuring reproducibility by modifying the codebase for GAC and TQC. Results show that Parameterized Action Greedy Actor-Critic (PAGAC) outperformed the other algorithms, achieving the fastest training times and highest returns across benchmarks, completing 5,000 episodes in 41:24 for the Platform game and 24:04 for the Robot Soccer Goal game. Its speed and stability provide clear advantages in complex action spaces. Compared to PASAC and PATQC, PAGAC demonstrated superior efficiency and reliability, making it ideal for tasks requiring rapid convergence and robust performance. Future work could explore hybrid strategies combining entropy regularization with truncation-based methods to enhance stability and expand investigations into generalizability.
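To make the "parametrized action" setting concrete, the following is a minimal sketch of how such a space can be represented: a discrete action choice paired with continuous parameters. The number of discrete actions and the parameter bounds are assumed for demonstration and are not taken from the Platform-v0 or Goal-v0 benchmarks.

```python
# Minimal sketch of a parametrized action (PA) space: a discrete choice plus a
# continuous parameter vector per choice (illustrative; sizes/bounds assumed).
import numpy as np
from gymnasium import spaces

n_discrete = 3  # e.g. run / hop / leap in a Platform-like task (assumed)
param_space = spaces.Tuple(
    tuple(spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
          for _ in range(n_discrete))
)
action_space = spaces.Tuple((spaces.Discrete(n_discrete), param_space))

# An agent outputs one discrete choice plus parameters for every choice; the
# environment only uses the parameters of the selected discrete action.
discrete_choice, all_params = action_space.sample()
chosen_params = all_params[discrete_choice]
print(discrete_choice, chosen_params)
```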
Authors - Amalia Mukhlas, Shahrinaz Ismail, Bazilah A. Talip, Jawahir Che Mustapha, Juliana Jaafar Abstract - The pharmaceutical industry’s significant influence on other sectors underscores the urgency of implementing sustainable systems. Technology offers invaluable tools for achieving this goal. This study examines the challenges pharmaceutical websites face in supporting collaboration and governance, emphasizing the difficulties in providing accessible and relevant information. Using experiential observation, it highlights governance-related inefficiencies in website design. The proposed solutions focus on improving website usability for transparency, accountability and social involvement, in line with sustainable systems practices. The findings disclose the suboptimal design of many pharmaceutical websites, which hinders collaboration with external parties, specifically academia, and potentially impacts the industry’s sustainability efforts.
Authors - Sarah Abdalrahman Al-Shqaqi, Mohammed Zayed, Kamal Al-Sabahi, Adnan Al-Mutawkkil Abstract - The development of the Internet of Things (IoT) in recent years has significantly contributed to a paradigm change in all aspects of life. IoT has rapidly gained traction in a short period of time across a variety of sectors, including business, healthcare, governance, infrastructure management, consumer services, and even defense. IoT has the ability to monitor systems through the delivery of consistent and precise information. In medical services, it enables informed decision-making, and surrounding technologies will play an essential role in providing healthcare to people in remote locations. Health centers gather data from areas where cholera has emerged or is suspected, which is then sent to the Ministry of Public Health and Population. The World Health Organization in Yemen analyzes the data using two systems (eDEWS and EWARS), but these lack the full capability for early detection of cholera and do not utilize Internet of Things technology, which plays an important role in solving many health problems, including cholera. To address this issue, we propose a framework consisting of six layers and add parameters that help with the early detection of cholera. In this study, the IoT framework was used for early detection of cholera, assisting the Ministry of Public Health and Population and the World Health Organization in making informed decisions by adding an intelligent medical server layer. Overall, this framework is applicable to any field, particularly healthcare for cholera.
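As a rough illustration of the sensing-to-server data flow such a layered framework implies, the sketch below posts one sensor reading to an ingestion endpoint. The endpoint URL, station identifiers, and payload fields are hypothetical and not part of the proposed framework.

```python
# Minimal sketch of the sensing-to-server data flow implied by a layered IoT
# framework (illustrative; endpoint URL and payload fields are hypothetical).
import json
import urllib.request

reading = {
    "station_id": "health-center-01",      # hypothetical identifier
    "water_turbidity_ntu": 8.4,            # example environmental parameters
    "water_ph": 6.9,
    "suspected_cases": 3,
    "timestamp": "2025-01-15T08:30:00Z",
}

req = urllib.request.Request(
    "https://example.org/api/cholera/ingest",   # hypothetical ingestion endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# The intelligent medical server layer would receive, store and analyse such
# readings before forwarding alerts to decision makers.
# urllib.request.urlopen(req)  # not executed here: the endpoint is illustrative
```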
Authors - Praveen Kumar Sandanamudi, Neha Agrawal, Nikhil Tripathi, Pavan Kumar B N Abstract - Over the past few years, the proliferation of resource-constrained devices and the growth of the Internet of Things (IoT) for UAV applications have accentuated the need for lightweight cryptographic (LWC) algorithms. These algorithms are designed to be more suitable for UAV-application-based IoT devices as they are efficient in terms of memory usage, computation, power utilization, etc. Based on a literature study, the algorithms most suitable for UAV-application-based resource-constrained devices are identified in this paper. This list also includes the ASCON cipher, winner of NIST’s lightweight cryptography standardization contest. Furthermore, these algorithms have been implemented on a UAV-application-based Raspberry Pi 3 Model B to analyze their hardware and software performance with respect to essential metrics, such as latency, power consumption, throughput, and energy consumption, for different payloads. From the experimental results, it has been observed that SPECK is optimized for software implementations and may offer better performance in certain scenarios, especially on UAV-application-based resource-constrained devices. ASCON, on the other hand, provides both encryption and authentication in a single pass, potentially reducing latency and overhead. This paper aims to assist researchers in pinpointing the most appropriate LWC algorithm tailored to specific scenarios and requirements.
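The sketch below shows the general shape of a latency/throughput harness over varying payload sizes, of the kind such benchmarking implies. AES-GCM from the `cryptography` package is used only as a stand-in cipher; it is not one of the lightweight algorithms evaluated in the paper, and the payload sizes and run counts are assumed.

```python
# Minimal sketch of a latency/throughput harness over varying payload sizes
# (illustrative; AES-GCM is a stand-in, not one of the studied LWC ciphers).
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

for payload_size in (64, 256, 1024, 4096):          # bytes
    payload = os.urandom(payload_size)
    nonce = os.urandom(12)                          # reused here only for timing
    runs = 1000
    start = time.perf_counter()
    for _ in range(runs):
        aead.encrypt(nonce, payload, None)
    elapsed = time.perf_counter() - start
    latency_ms = 1000 * elapsed / runs
    throughput_kbps = (payload_size * runs * 8 / 1000) / elapsed
    print(f"{payload_size:5d} B  latency {latency_ms:.3f} ms  "
          f"throughput {throughput_kbps:.1f} kbit/s")
```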
Authors - Mmapula Rampedi, Funmi Adebesin Abstract - The healthcare sector has generally been reluctant to adopt digital technologies. However, the COVID-19 pandemic pushed the industry to accelerate its digital transformation. Digital twins, virtual replicas of human organs or the entire human body, are revolutionizing healthcare and the management of healthcare resources. Digital twins can improve the accuracy of patients’ diagnoses through access to their virtual replica data. This enables healthcare professionals to make informed decisions about patients’ conditions and treatment options. This paper presents the results of a systematic literature review that investigated how digital twins are being utilized in the healthcare sector. A total of 6,714 papers published between 2019 and April 2024 were retrieved from four databases using specific search terms. A screening process based on inclusion and exclusion criteria resulted in a final set of 34 studies that were analyzed. The qualitative content analysis of the 34 studies resulted in the identification of five themes, namely: (i) the technologies that are integrated into digital twins; (ii) the medical specialties where digital twins are being used; (iii) the different application areas of digital twins in healthcare; (iv) the benefits of the application of digital twins in healthcare; and (v) the challenges associated with the use of digital twins in healthcare. The outcome of the study showcased the potential for the adoption of digital twins to revolutionize healthcare service delivery by mapping the medical specialties of use to the different application areas. The study also highlights the benefits and challenges associated with the adoption of digital twins in the healthcare sector.
Authors - Nazli Tokatli, Mucahit Bayram, Hatice Ogur, Yusuf Kilic, Vesile Han, Kutay Can Batur, Halis Altun Abstract - This study aims to create deep learning models for the early identification and classification of brain tumours. Models such as U-Net, DAU-Net, DAU-Net 3D, and SGANet have been used to evaluate brain MRI images accurately. Magnetic resonance imaging (MRI) is the most commonly used method in brain tumour diagnosis, but it is a complicated procedure due to the brain’s complex structure. This study investigated the ability of deep learning architectures to increase the accuracy of brain tumour diagnosis. We used the BraTS 2020 dataset to segment and classify brain tumours. The U-Net model designed for the project achieved an accuracy rate of 97% with a loss of 47%, DAU-Net reached 90% accuracy with a loss of 33%, DAU-Net 3D achieved 99% accuracy with a loss of 35%, and SGANet achieved 99% accuracy with a loss of 20%, all demonstrating effective outcomes. These findings aim to improve patient care quality by speeding up medical diagnosis processes using computer-aided technology. Doctors can detect 3D tumours from MRI images using software developed as part of the research. Project management throughout the study’s data collection, model creation, and evaluation stages was handled through defined work packages. Regarding brain tumour segmentation, a 3D U-Net architecture with multi-head attention mechanisms provides doctors with the best tools for planning surgery and giving each patient the best treatment options. The user-friendly Turkish interface enables simple MRI image uploads and quick, understandable findings.
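For readers unfamiliar with the U-Net family, the following is a minimal encoder/decoder with one skip connection, sketched in PyTorch. It is illustrative only; the study's models (DAU-Net, DAU-Net 3D, SGANet) are substantially larger and include attention mechanisms, and the channel sizes here are assumed.

```python
# Minimal 2D U-Net-style encoder/decoder with one skip connection (sketch only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)              # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                            # encoder features (skip path)
        b = self.bottleneck(self.pool(e))          # bottleneck features
        d = self.dec(torch.cat([self.up(b), e], dim=1))
        return self.head(d)                        # per-pixel class logits

logits = TinyUNet()(torch.randn(1, 1, 128, 128))   # e.g. one MRI slice
print(logits.shape)                                # torch.Size([1, 2, 128, 128])
```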
Authors - Radford Burger, Olawande Daramola Abstract - Clinical Decision Support Systems (CDSS) have the potential to significantly improve healthcare quality in resource-limited settings (RLS). Despite evidence supporting the effectiveness of CDSS, their adoption and implementation rates remain low in RLS due to low levels of computer literacy among health workers, fragmented and unreliable infrastructure, and technical challenges. A thorough understanding of requirements is critical for the design of CDSS, which will be relevant to RLS. This paper explores the elicitation and prioritisation of requirements of a CDSS tailored to gait-related diseases in RLS. To do this, we conducted a qualitative literature analysis to identify potential requirements. After that, the requirements were presented to gait analysis experts for revision and prioritisation using the MoSCoW requirements prioritisation technique. The analysis of the results of the prioritisation process shows that for the functional requirements, 59.1% are fundamental and essential (Must Have), 36.3% are important but not fundamental (Should Have), 4.5% are negotiable requirements that are nice-to-have, but not important or fundamental (Could Have). All the non-functional requirements (100%) that pertain to usability and security were considered fundamental and essential (Must Have). This study provides a solid foundation for understanding the requirements of CDSS that are tailored to gait-related diseases in RLS. It also provides a guide for software developers and re-searchers on the design choices regarding the development of CDSS for RLS.
Authors - Omar Ahmed Abdulkader, Bandar Ali Alrami Al Ghadmi, Muhammad Jawad Ikram Abstract - In an era characterized by escalating digital threats, cybersecurity has emerged as a paramount concern for individuals and organizations globally. Traditional security measures, often reliant on centralized systems, face significant challenges in combating increasingly sophisticated cyberattacks, leading to substantial data breaches, financial losses, and erosion of trust. This paper investigates the transformative potential of blockchain technology as a robust solution to enhance cybersecurity frameworks. By leveraging the core principles of blockchain—decentralization, transparency, and immutability—this study highlights how blockchain can address critical cybersecurity challenges. For instance, the use of blockchain for data integrity ensures that information remains unaltered and verifiable, significantly reducing the risk of tampering. Furthermore, decentralized identity management systems can provide enhanced security against identity theft and phishing attacks, allowing users to maintain control over their personal information. Through a review of current applications and case studies, this paper illustrates successful implementations of blockchain in various sectors, including finance, healthcare, and supply chain management. Notable results include a reported 30% reduction in fraud rates within financial transactions utilizing blockchain technology and a marked improvement in incident response times due to the transparency and traceability offered by blockchain solutions. Despite its promising applications, this paper also addresses existing challenges, such as scalability issues that can hinder transaction speed, regulatory concerns that complicate implementation, and technical complexities that require specialized knowledge. These barriers pose significant obstacles to the widespread adoption of blockchain in cybersecurity. In conclusion, this paper emphasizes the need for further research and development to overcome these challenges and optimize the integration of blockchain within cybersecurity frameworks. By doing so, we can foster a safer digital environment and enhance resilience against the evolving landscape of cyber threats.
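The data-integrity property this abstract leans on can be illustrated with a few lines of hash chaining: each record stores the hash of its predecessor, so any tampering breaks the chain. This is a minimal sketch of the idea only; a real blockchain adds consensus, signatures, and distribution, and the record contents here are invented.

```python
# Minimal sketch of blockchain-style data integrity via hash chaining.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]

def append_block(data: str) -> None:
    prev = chain[-1]
    chain.append({"index": prev["index"] + 1, "data": data,
                  "prev_hash": block_hash(prev)})

def verify_chain() -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

append_block("payment #1001 approved")        # invented example records
append_block("identity credential issued")
print(verify_chain())          # True
chain[1]["data"] = "tampered"  # any alteration is detectable
print(verify_chain())          # False
```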
Authors - Catia Silva, Nelson Zagalo, Mario Vairinhos Abstract - The preservation of cultural heritage, crucial for maintaining cultural identity, is increasingly threatened by natural degradation and socio-economic changes. Cultural tourism, supported by information and communication technologies, has become a key strategy for sustaining and promoting heritage sites. However, research on the most effective digital elements for amplifying tourist engagement remains limited. To address this gap, the present study explored the use of the Cultural Engagement Digital Model, which integrates participatory activities through game, narrative, and creativity elements, to enhance visitor engagement at cultural sites. The study focused on designing and testing three prototypes for Almeida, a historical village in Guarda, Portugal, involving both visitors and interaction design experts to evaluate user preferences regarding the proposed activities. The findings of this study indicate that activities aligned with participatory dimensions can effectively engage users. These results help to solidify the model as a valuable instrument for designing mobile applications capable of promoting tourist engagement.
Authors - Juliana Silva, Pedro Reisinho, Rui Raposo, Oscar Ribeiro, Nelson Zagalo Abstract - As global life expectancy rises and the population of older adults increases, a higher prevalence of age-related diseases, such as dementia, is being observed. However, dementia-like symptoms are not exclusively caused by neurodegenerative conditions; pseudodementia, associated with late-life depression, can mimic the symptoms of dementia but may be potentially reversible with appropriate interventions. Despite this, individuals with pseudodementia still have a higher risk of progressing to neurodegenerative dementia. To counteract this possibility and aid in symptom reversal, non-pharmacological interventions may be a potential treatment. The present case study explored the feasibility of promoting storytelling through virtual reminiscence therapy in an older adult with pseudodementia, while also assessing the level of technological acceptance. The intervention included two sessions: one using a digital memory album and another utilizing 360º videos of personally significant locations. The results support the viability of using virtual reality as a therapeutic instrument to stimulate reminiscence and promote storytelling with a manageable learning curve and without inducing symptoms of cybersickness.
Authors - Franciskus Antonius Alijoyo, N Venkatramana, Omaia AlOmari, Shamim Ahmad Khan, B Kiran Bala Abstract - The Internet of Things (IoT) is becoming a crucial component of many industries, from smart cities to healthcare, in today's networked world. IoT devices are becoming more and more susceptible to security risks, especially zero-day (0day) attacks, which take advantage of undiscovered flaws. The dynamic and dispersed nature of these systems makes it difficult to identify and mitigate such attacks in IoT contexts. This research focuses on a deep learning model that was created and implemented in Python, designed specifically to perform the detection task with high accuracy. The proposed Autoencoder (AE) with Attention Mechanism model demonstrates exceptional performance in detecting zero-day attacks, achieving an accuracy of 99.45%, precision of 98.56%, recall of 98.53%, and an F1 score of 98.21%. The attention mechanism helps the model focus on the most relevant features, enhancing its efficiency and reducing computational overhead, making it a promising solution for real-time security applications in IoT systems. Compared to previous methods, such as STL+SVM and AE+DNN, the proposed model significantly outperforms them. These results highlight its superior ability to identify anomalies with minimal false positives. Because of its resilience, the model is highly effective at detecting zero-day attacks. The results demonstrate how deep learning may improve IoT systems' security posture by offering proactive, real-time protections against zero-day threats, resulting in safer and more robust IoT environments.
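To ground the "autoencoder with attention" idea, the sketch below pairs a small autoencoder with one simple form of feature attention and flags anomalies by reconstruction error. It is not the authors' architecture; the layer sizes, input dimensionality, and thresholding rule are assumed.

```python
# Minimal sketch: autoencoder with a feature-attention gate, anomalies flagged
# by reconstruction error (illustrative; not the authors' architecture).
import torch
import torch.nn as nn

class AttentiveAE(nn.Module):
    def __init__(self, n_features=40, latent=8):
        super().__init__()
        # attention over input features: weights sum to 1 across features
        self.attention = nn.Sequential(nn.Linear(n_features, n_features),
                                       nn.Softmax(dim=-1))
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        weights = self.attention(x)          # focus on the most relevant features
        z = self.encoder(x * weights)
        return self.decoder(z)

model = AttentiveAE()
x = torch.randn(64, 40)                      # placeholder traffic feature vectors
recon = model(x)
score = ((recon - x) ** 2).mean(dim=1)       # anomaly score per sample
flagged = score > score.mean() + 2 * score.std()   # assumed threshold rule
print(int(flagged.sum()), "samples flagged as potential zero-day traffic")
```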
Authors - Freedom Khubisa, Oludayo Olugbara Abstract - This paper presents the development and evaluation of an artificial intelligence (AI)-driven web application for detecting maize diseases. The AI application was designed according to the design science methodology to offer accurate, real-time detection of maize diseases through a user-friendly interface. The application used the Flask framework and the Python programming language, leveraging multiple libraries and Application Programming Interfaces (APIs) to handle aspects such as the database, real-time communication, AI models, weather forecast data, and language translation. The application's AI model is a stacked ensemble of cutting-edge deep learning architectures. Technical performance testing was performed using GTmetrix metrics, and the results were remarkable. The WebQual 4.0 framework was used to evaluate the application's usability, information quality and service interaction quality. Cronbach’s alpha (α) was applied to assess the internal consistency of the WebQual 4.0 instrument, which yielded an acceptable reliability score of 0.6809. The usability analysis showed that users perceived the AI-driven web application as intuitive, with high scores computed for navigation and ease of use. The quality of information was rated positively, with users appreciating the reliability and relevance of the maize disease detection results of the AI application. The service interaction indicated potential for enhancement, a concern also highlighted in qualitative user feedback that will be considered for future improvement. The study findings generally indicated that our AI application has great potential to improve agricultural practices by providing early maize disease diagnostics and decisive information to aid maize farmers and enhance maize yields.
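Since the application is built on Flask, a minimal prediction endpoint of the kind such an application exposes might look like the sketch below. The route name, class labels, and the `predict` placeholder are assumptions; the study's stacked deep-learning ensemble is not reproduced here.

```python
# Minimal sketch of a Flask image-prediction endpoint (illustrative; route,
# labels and the predict() placeholder are assumed, not the study's code).
import io
import numpy as np
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
CLASSES = ["healthy", "common_rust", "gray_leaf_spot", "blight"]  # assumed labels

def predict(image: np.ndarray) -> dict:
    # Placeholder standing in for the stacked deep-learning ensemble.
    return {"label": CLASSES[0], "confidence": 0.0}

@app.route("/predict", methods=["POST"])
def predict_route():
    file = request.files["image"]
    image = np.array(Image.open(io.BytesIO(file.read())).convert("RGB"))
    return jsonify(predict(image))

if __name__ == "__main__":
    app.run(debug=True)
```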
Authors - Abdulrahman S. Alenizi, Khamis A. Al-Karawi Abstract - The liver is a vital organ responsible for numerous physiological functions in the human body. In recent years, the prevalence of liver diseases has risen significantly worldwide, mainly due to unhealthy lifestyle choices and excessive alcohol use. The condition is worsened by several hepatotoxic factors. Obesity is a leading cause of chronic liver disease; obesity, undiagnosed viral hepatitis infections, alcohol consumption, increased risk of hemoptysis or hematemesis, renal or hepatic failure, jaundice, hepatic encephalopathy, and many other conditions can all contribute to chronic liver disease. Hepatitis, an infection that inflames liver tissue, has been thoroughly investigated using machine learning for illness identification. Numerous models are employed to diagnose illnesses, but limited research focuses on the connections between hepatitis symptoms. This research examines chronic liver disease through machine learning predictions. It assesses the efficacy of multiple algorithms, including Logistic Regression, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbours (K-NN), and Decision Tree, by quantifying their accuracy, precision, recall, and F1 score. Experiments were performed on the dataset utilising these classifiers to evaluate their efficacy. The findings demonstrate that the Random Forest method attains the highest accuracy at 87.76%, surpassing other models in disease prediction. It also demonstrates superiority in precision, recall, and F1 score. Consequently, the study concludes that the Random Forest model is the most effective for predicting liver disease in its early stages.
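The classifier comparison described here follows a standard scikit-learn pattern; a minimal sketch is shown below. The data is a synthetic placeholder and the model parameters are assumptions, so the printed scores are not the study's results.

```python
# Minimal sketch of a five-classifier comparison with standard metrics
# (illustrative; synthetic data stands in for the liver-disease records).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=10, weights=[0.7, 0.3],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0,
                                          stratify=y)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
    "K-NN": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    pred = pipe.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:20s} acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} f1={f1_score(y_te, pred):.3f}")
```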
Authors - Phuong Thao Nguyen Abstract - In recent years, the application of machine learning (ML) in anomaly detection for auditing and financial error detection has garnered significant attention. Traditional auditing methods, often reliant on manual inspection, face challenges in accuracy and efficiency, especially when handling large datasets. This study explores the integration of ML techniques to enhance the detection of anomalies in financial data specific to Thai Nguyen Province, Vietnam. We evaluate multiple ML algorithms, including supervised models (logistic regression, support vector machines) and unsupervised models (k-means clustering, isolation forest, autoencoders), to identify unusual patterns and potential financial discrepancies. Using financial records and audit reports from Thai Nguyen, the models were trained and tested to assess their accuracy, precision, and robustness. Our findings demonstrate that ML models can effectively detect anomalies and improve error identification compared to traditional methods. This paper provides practical insights and applications for local auditors, highlighting ML’s potential to strengthen financial oversight and fraud prevention within Thai Nguyen. Future research directions are also proposed to enhance model interpretability and address unique challenges in Vietnamese financial contexts.
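One of the unsupervised models mentioned, the isolation forest, can be sketched in a few lines as below. The column names and contamination rate are assumed placeholders, not the study's actual financial records from Thai Nguyen.

```python
# Minimal sketch of unsupervised anomaly flagging with an isolation forest
# (illustrative; column names and contamination rate are assumed).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
records = pd.DataFrame({
    "amount": rng.lognormal(mean=8, sigma=1, size=1000),   # placeholder ledger data
    "days_to_settle": rng.integers(1, 90, size=1000),
    "num_line_items": rng.integers(1, 20, size=1000),
})

model = IsolationForest(contamination=0.02, random_state=0)  # ~2% assumed anomalies
records["anomaly"] = model.fit_predict(records) == -1        # -1 marks outliers
print(records[records["anomaly"]].head())
```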
Authors - Aman Mussa, Madina Mansurova Abstract - The rapid advancement of neural networks has revolutionized multiple domains, as evidenced by the 2024 Nobel Prizes in Physics and Chemistry, both awarded for contributions to neural networks. Large language models (LLMs), such as ChatGPT, have significantly reshaped AI interactions, experiencing unprecedented growth and recognition. However, these models still face substantial challenges with low-resource languages like Kazakh, which accounts for less than 0.1% of online content. The scarcity of training data often results in unstable and inaccurate outputs. To address this issue, we present a novel Kazakh language dataset specifically designed for self-instruct fine-tuning of LLMs, comprising 50,000 diverse instructions drawn from internet sources and textbooks. Using Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning technique, we successfully fine-tuned the LLaMA 2 model on this dataset. Experimental results demonstrate improvements in the model’s ability to comprehend and generate Kazakh text, despite the absence of established benchmarks. This research underscores the potential of large-scale models to bridge the performance gap in low-resource languages and highlights the importance of curated datasets in advancing AI-driven technologies for underrepresented linguistic communities. Future work will focus on developing robust benchmarking standards to further evaluate and enhance these models.
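For readers unfamiliar with LoRA, the sketch below shows a typical parameter-efficient setup with the Hugging Face `peft` library. The base checkpoint (a gated LLaMA 2 model), rank, and target modules are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of LoRA fine-tuning setup with peft (illustrative; checkpoint,
# rank and target modules are assumed, not the study's configuration).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_id = "meta-llama/Llama-2-7b-hf"          # assumed (gated) base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                      # low-rank dimension (assumed)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # attention projections (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()            # only the LoRA adapters are trained
# Training on the 50,000 Kazakh self-instruct pairs would then proceed with a
# standard causal-LM objective (e.g. transformers.Trainer or an SFT trainer).
```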
Authors - Eduardo Puraivan, Pablo Ormeno-Arriagada, Steffanie Kloss, Connie Cofre-Morales Abstract - We are in the information age, but also in the era of disinformation, with millions of fake news items circulating daily. Various fields are working to identify and understand fake news. We focus on hybrid approaches combining machine learning and natural language processing, using surface linguistic features, which are independent of language and enable a multilingual approach. Many studies rely on binary classification, overlooking multiclass problems and class imbalance, often focusing only on English. We propose a methodology that applies surface linguistic features for multiclass fake news detection in a multilingual context. Experiments were conducted on two datasets, LIAR (English) and CLNews (Spanish), both imbalanced. Using Synthetic Minority Oversampling Technique (SMOTE), Random Oversampling (ROS), and Random Undersampling (RUS), we observed improved class detection. For example, in LIAR, the classification of the ‘false’ class improved by 43.38% using SMOTE with Adaptive Boosting. In CLNews, the ROS technique with Random Forest raised accuracy to 95%, representing a 158% relative improvement over the unbalanced scenario. These results highlight our approach’s effectiveness in addressing the problem of multiclass fake news detection in an imbalanced, multilingual context.
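In the spirit of the SMOTE plus Adaptive Boosting setup reported here, the sketch below places oversampling inside a cross-validated pipeline so it is applied only to training folds. The synthetic features, class weights, and parameters are assumptions, not the study's corpora or configuration.

```python
# Minimal sketch: SMOTE oversampling inside a cross-validated pipeline with
# Adaptive Boosting (illustrative; data and parameters are assumed).
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Placeholder for surface linguistic features of an imbalanced multiclass corpus.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, weights=[0.6, 0.2, 0.15, 0.05],
                           random_state=0)

pipeline = Pipeline([
    ("oversample", SMOTE(random_state=0)),        # applied only to training folds
    ("clf", AdaBoostClassifier(random_state=0)),
])
scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1_macro")
print("macro-F1 per fold:", scores.round(3))
```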
Authors - Mariam Esmat, Mohamed Elgemeie, Mohamed Sokar, Heba Ali, Sahar Selim Abstract - This paper explores the relationship between deep learning approaches and the intricate nature of EEG signals, focusing on the development of a P300 brain speller. The study uses an underutilized dataset to explore the classification of EEG signals and the distinguishing features of "target" and "non-target" signals. The data processing adhered to current literature standards, and various classification methods, including deep learning approaches such as Recurrent Neural Networks, Artificial Neural Networks, and Transformers, as well as Linear Discriminant Analysis, were employed to classify processed EEG signals into target and non-target categories. The classification performance was evaluated using the area under the curve (AUC) score and accuracy. This research lays a foundation for future advancements in understanding and utilizing the human brain in neuroscience and technology.
Authors - Angel Peredo, Hector Lugo, Christian Narcia-Macias, Jose Espinoza, Daniel Masamba, Adan Gandarilla, Erik Enriquez, Dong-Chul Kim Abstract - This paper explores the under-examined potential of offline reinforcement learning algorithms in the context of Smart Grids. While online methods, such as Proximal Policy Optimization (PPO), have been extensively studied, offline methods, which inherently avoid real-time interactions, may offer practical safety benefits in scenarios like power grid management, where suboptimal policies could lead to severe consequences. To investigate this, we conducted experiments in Grid2Op environments with varying grid complexity, including differences in size and topology. Our results suggest that offline algorithms can achieve comparable or superior performance to online methods, particularly as grid complexity increases. Additionally, we observed that the diversity of training data plays a crucial role, with data collected through environment sampling yielding better results than data generated by trained models. These findings underscore the value of further exploring offline approaches in safety-critical applications.
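The observation about data diversity comes down to how the offline dataset is collected. The sketch below builds a transition buffer by environment sampling; CartPole-v1 stands in for a Grid2Op power-grid environment, and the buffer format is assumed.

```python
# Minimal sketch of collecting an offline dataset by environment sampling
# (illustrative; CartPole-v1 stands in for a Grid2Op grid environment).
import gymnasium as gym

env = gym.make("CartPole-v1")
buffer = []                                   # (s, a, r, s', done) tuples

obs, _ = env.reset(seed=0)
for _ in range(5000):
    action = env.action_space.sample()        # diverse, policy-free data collection
    next_obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
    buffer.append((obs, action, reward, next_obs, done))
    obs, _ = env.reset() if done else (next_obs, None)

# An offline algorithm (e.g. behaviour cloning, CQL or IQL) would then train
# purely from `buffer`, with no further environment interaction.
print(len(buffer), "transitions collected")
```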
Authors - Mohammed Sabiri, Bassou Aouijil Abstract - Let $R_m = \mathbb{F}_{p^r}[v]/\langle v^m - v \rangle$, where $p$ is an odd prime, $\mathbb{F}_{p^r}$ is the finite field with $p^r$ elements, and $v^m = v$. In this study, we investigate quantum codes over $\mathbb{F}_{p^r}$ by using constacyclic codes over $R_m$ that are dual-containing. Furthermore, by using cyclic codes over the ring $R_m$ and their decomposition over the finite field $\mathbb{F}_{p^r}$ into cyclic codes, LCD codes are given as images of LCD codes over $R_m$.
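For background, the standard route from dual-containing classical codes to quantum codes is the q-ary CSS construction; the snippet below states that well-known fact alongside the ring used here, and is not specific to this paper's construction.

```latex
% Background (not specific to this paper). Ring used in the abstract:
\[
  R_m \;=\; \mathbb{F}_{p^r}[v]/\langle v^m - v \rangle .
\]
% q-ary CSS construction: if $C$ is an $[n,k,d]_{p^r}$ linear code with
% $C^{\perp} \subseteq C$, then there exists a quantum code with parameters
% $[[\,n,\; 2k-n,\; \geq d\,]]_{p^r}$.
```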
Authors - Hector Lugo, Angel Peredo, Christian Narcia-Macias, Jose Espinoza, Daniel Masamba, Adan Gandarilla, Erik Enriquez, DongChul Kim Abstract - Cancer continues to be a major global health challenge, with high rates of morbidity and mortality. Traditional chemotherapy regimens often overlook individual patient variability, leading to suboptimal outcomes and significant side effects. This paper presents the application of Reinforcement Learning (RL) and Decision Transformers (DT) for developing personalized chemotherapy strategies. By leveraging offline data and simulated environments, our approach dynamically adjusts dosing strategies based on patient responses, optimizing therapeutic efficacy while minimizing toxicity. Experimental results show that DTs outperform both traditional Constant Dose Regimens (CDR) and online training methods like Proximal Policy Optimization (PPO), leading to improved survival times and reduced mortality. Our findings highlight the potential of RL and DTs to revolutionize cancer treatment by offering more effective and personalized therapeutic options.
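To make the Decision Transformer idea concrete, the sketch below shows the returns-to-go relabelling that DT training relies on; the reward values are invented and the tokenisation and model are omitted, so this is only an illustration of the data preparation step, not the authors' method.

```python
# Minimal sketch of returns-to-go conditioning for a Decision Transformer
# (illustrative; rewards are made up, model/tokenisation omitted).
import numpy as np

rewards = np.array([0.0, 0.5, 0.5, -1.0, 2.0])   # per-step rewards of one episode

def returns_to_go(r: np.ndarray) -> np.ndarray:
    """R_t = sum of rewards from step t to the end of the episode."""
    return np.cumsum(r[::-1])[::-1]

rtg = returns_to_go(rewards)
print(rtg)   # [2.  2.  1.5 1.  2. ]
# A Decision Transformer is trained on interleaved (return-to-go, state, action)
# tokens, so at inference a target return can be supplied to steer the policy,
# e.g. a dosing schedule aiming for a desired treatment outcome.
```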
Authors - Sharmila Rathod, Aryan Panchal, Krish Ramle, Ashlesha Padvi, Jash Panchal Abstract - Diabetes, or hyperglycemia, a condition characterized by significantly elevated blood sugar levels, can pose a significant threat to an individual's effective lifespan as well as a significant risk for various cardiovascular diseases. Reliable and non-invasive monitoring of hyperglycemia, as well as hypoglycemia, is important for timely intervention and prognosis. The paper presents an extensive and structured survey of non-invasive glucose monitoring and diabetes detection using machine learning and signal analysis techniques. The paper adopts a comparative analysis approach that presents the literature in tabular and diagrammatic form. An examination of 10 papers that use Photoplethysmography (PPG) and Electrocardiography (ECG) signals to detect glucose variations with machine learning techniques has been carried out. The review highlights each paper's proposed system, unique findings, improvements, techniques, methods, future prospects, comparison with previous studies, feature importance and model evaluation, as well as stated accuracy. This comprehensive analysis aims to provide insights into the methodologies for non-invasive monitoring of glycemic conditions, thereby contributing to the development of improved disease analysis.
Authors - Anastasia Vitvitskaya, Almaz Galimov Abstract - We are living in the age of digitalization, a time when the latest technologies are changing everything around us. Artificial intelligence and digitalization have affected all aspects of our lives and society. It is important to realise that the Covid-19 pandemic accelerated the development of digital technologies. Technologies of augmented and virtual reality (AR/VR) are used in many fields, including education. Online platforms allowed people to work and study remotely from the comfort of their homes, which made the online format more popular. Now, informal online education and the use of generative artificial intelligence are actively developing, but it is crucial to understand the implications that the active use of artificial intelligence in education will have. The purpose of the study is to identify the tasks for which generative artificial intelligence is used. As research methods, we used the collection and analysis of scientific literature, as well as a survey in which 750 people reported the purposes for which they use artificial intelligence. The article considers theoretical and practical aspects of generative artificial intelligence application and defines and classifies these tasks.