The 10th International Congress on Information and Communication Technology, held concurrently with the ICT Excellence Awards (ICICT 2025), will take place in London, United Kingdom | February 18-21, 2025.
Authors - Reena (Mahapatra) Lenka, Rajiv Divekar, Jaya Chitranshi Abstract - A performance measurement analysis system is proposed to overcome the problems various higher education institutions face in improving their performance. The system satisfies scholars' requirements relating to the institute. It also meets the faculty's needs: faculty can keep track of student details, attendance, and marks, upload assignments, form a clear picture of their students, and see how their own performance can be improved. It likewise satisfies the administration's need to keep student records in line with the institute's requirements. When followed and implemented in the higher education sector, this system would improve an institute's performance to a great extent, strengthening its brand name.
Authors - Jaya Chitranshi, Rajiv Divekar, Reena (Mahapatra) Lenka Abstract - ‘Performance management’ should be the focus of any institution of ‘higher education’ to achieve its sublime objective of educating, training and steering mature minds. The purpose of transforming students into high-performance individuals can be achieved completely only when there are checkpoints and defined steps in the process of ‘performance management’. The paper attempts to identify existing gaps in the literature with respect to ‘performance management’ in ‘higher education’ and proposes a nine-step model of ‘performance management’ for increasing the performance of students in ‘higher education’. These steps are: (i) Goal-setting, where targets are set for students for one term; (ii) Coaching and guiding, done to help students achieve the goals set; (iii) Performance measurement, done to assess performance with respect to the goals set; (iv) Mentoring, done to help students explore their strengths, potential and opportunities for improving performance with respect to the set goals; (v) Counselling, done to help students identify areas where they still lag or where their potential remains unused, so as to improve performance with respect to specific goals; (vi) Performance measurement, done to assess performance again; (vii) Performance aggregate for the goal(s), measured with respect to each specific goal and then combined across all goals; (viii) Reward/advisory, decided based on the aggregate of performance; (ix) New goal(s)/revising and recalibrating goals, where, based on the reward/advisory and the aggregate of performance, new goals can be set or existing goals revised and recalibrated for the next term.
It is essential that ‘continuous feedback’ be provided to students in ‘higher education’ institutions, so that they have a clear direction for improving their performance.
Authors - Jaya Chitranshi, Rajiv Divekar, Reena (Mahapatra) Lenka Abstract - One of the main objectives of higher education is to help students improve their academic performance. This objective can only be accomplished when students consistently receive tailored feedback on how to improve their performance levels. This research paper focuses on the process and model of performance feedback communication in higher education. The process flow of performance feedback communication illustrates the input received from the student, faculty, feedback type, login and model user. On the basis of these inputs, feedback reports (student report, faculty report, feedback-type report and model user report) can be created and user login details can be checked. The step-wise model of performance feedback communication in higher education provides continuous performance feedback to students through four main steps: Step 1) Monthly feedback collection (360-degree); Step 2) Matching with expectations; Step 3) Continuous feedback communication, comprising 3(A) positive feedback, 3(B) constructive feedback and 3(C) a supplement, active listening; Step 4) comprising 4.A(i) the positive-feedback script, 4.A(ii) the mode of communication for positive feedback (in public), 4.B(i) the constructive-feedback script and 4.B(ii) the mode of communication for constructive feedback (in private).
Authors - Eng Bah Tee, Insu Song Abstract - Mobile educational games have arisen as a fascinating tool for teaching difficult concepts in an interactive and engaging manner. A mobile educational game uses a game type, such as puzzle, strategy or role-playing, to drive the delivery of learning content. Currently, the game type is selected by the game designer or programmer. Previous research has noted that the pairing of game type and lesson content is a critical area requiring more study: at present, a teacher or game designer cannot be sure which game type would best teach a lesson on, say, Geography or Mathematics. In fact, earlier work found that game type has a significant impact on learning outcome and experience. To address this research gap, we embarked on Stage 2 of our study, applying artificial intelligence (AI): a machine learning model is employed to predict the evaluation score of the game type used to teach a subject lesson and to recommend the best game type for that lesson. We then proceeded to Stage 3 and evaluated the performance of the AI model by creating a test set of twenty games; twenty undergraduates at an Indonesian university were recruited to evaluate the games. The average score of all mobile game evaluations was above 3.5, supporting hypothesis H1 set out for Stage 3.
Authors - Robert Kudelic Abstract - The world of scientific publishing is changing; the days of the old subscription-based earnings model for publishers seem over, and we are entering a new era. An ever-increasing number of journals from disparate publishers are going Gold Open Access, yet have we rigorously ascertained the issue in its entirety, or are we touting the strengths while forgetting constructive criticism and a careful weighing of the evidence? We therefore present the current state of the art, in a compact review/bibliometrics style, of this hot topic, more relevant than ever, including challenges and potential solutions that are most likely to be acceptable to all parties. The solutions suggested by the performed analysis, at least for the time being, represent an inclusive publishing environment in which multiple publishing models compete for a piece of the pie and thus inhibit each other's flaws. The analysis also shows an apparent link between trends in scientific publishing and tumultuous world events, which has special significance for the publishing environment on the current world stage, implying that academic publishing may now have found itself at a tipping point of change.
Authors - Jaya Chitranshi, Rajiv Divekar, Reena (Mahapatra) Lenka Abstract - To satisfy its high purpose of shaping, developing, and directing adult minds, an institution of higher learning should have ‘performance management’ as its top priority. When there are clear phases and benchmarks in the ‘performance management’ process, the aim of transforming students into high-achieving individuals can be adequately achieved. The paper's objective is to identify research gaps in ‘performance management’ in ‘higher education’ and to recommend future research directions using bibliometric analysis.
Authors - Tajim Md. Niamat Ullah Akhund, Kenbu Teramoto Abstract - The demand for efficient human activity recognition systems has surged recently, driven by the need for intelligent monitoring in environments such as smart homes and workplaces. This paper introduces a novel one-dimensional modeling approach for measuring human activeness using a single Passive Infrared (PIR) sensor, notable for its simplicity, cost-effectiveness, and privacy-conscious design, and incorporating the Laplace distribution to analyze movement patterns. We define an activeness index μ, quantifying average human activity over time and allowing precise numerical assessment. Our method uses the sensor's movement data to generate numerical metrics of average activeness over time. The results demonstrate that this approach effectively captures human activity levels while minimizing equipment complexity. This work contributes to the growing field of human activity recognition by offering a practical solution that balances performance with user privacy and affordability.
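The paper's exact definition of the activeness index μ is not reproduced in the abstract; as a rough sketch of the idea, the following fits a Laplace distribution to inter-event intervals of hypothetical PIR trigger timestamps and reports the average trigger rate as a simple activity proxy (all names and numbers here are illustrative assumptions, not the authors' model):

```python
import numpy as np

def fit_laplace(samples):
    loc = np.median(samples)                # Laplace MLE for the location
    scale = np.mean(np.abs(samples - loc))  # Laplace MLE for the scale
    return loc, scale

def activeness_index(timestamps, window):
    # Inter-event intervals of the PIR triggers within an observation window
    intervals = np.diff(np.sort(timestamps))
    loc, scale = fit_laplace(intervals)
    rate = len(timestamps) / window         # average triggers per second
    return rate, loc, scale
```

On a stream of trigger timestamps this yields both a rate-based activity number and the fitted Laplace parameters describing the movement pattern.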
Authors - Hasti Vakani, Mithil Mistry, Hardikkumar Jayswal, Nilesh Dubey, Nitika Sharma, Rohan Patel, Dipika Damodar Abstract - Obesity has become a significant global health concern due to its association with various non-communicable diseases. Traditional methods for obesity assessment, such as BMI, often fail to capture the complexity of the condition, highlighting the need for more accurate predictive tools. This research utilizes machine learning algorithms, including Random Forest, Gradient Boosting, Support Vector Machines, and Neural Networks, in a stacking ensemble model to predict obesity levels. Using datasets from diverse populations, the model achieved a high accuracy of 96.69%. Key features such as BMI, age, and dietary habits were identified as critical predictors through Recursive Feature Elimination. The findings demonstrate the potential of advanced data-driven techniques to provide personalized insights into obesity management and underscore the transformative role of machine learning in public health initiatives.
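The pipeline named above (Recursive Feature Elimination feeding a stacking ensemble of the four base learners) can be sketched with scikit-learn; the synthetic data, hyperparameters, and the 6-feature cutoff below are placeholders, not the paper's configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier,
                              GradientBoostingClassifier, StackingClassifier)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the obesity dataset
X, y = make_classification(n_samples=300, n_features=12,
                           n_informative=6, random_state=0)

# Recursive Feature Elimination keeps the strongest predictors
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X, y)
X_sel = rfe.transform(X)

# Stacking ensemble over the four families of base learners
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The meta-learner (here a logistic regression) combines the base models' out-of-fold predictions, which is what lets a stack outperform its individual members.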
Authors - Kutub Thakur, Md Liakat Ali, Suzanna Schmeelk, Joan Debello, Denise Dragos Abstract - The escalating prevalence of obesity in young adults has become a pressing public health concern, requiring innovative risk prediction and intervention approaches. This paper examines the potential of combining traditional lifestyle factors with social media behavior to predict obesity risk in young adults while addressing ethical considerations related to data privacy and informed consent. By identifying the most predictive social media metrics associated with obesity risk, this research offers novel insights that could inform targeted prevention strategies. Through a mixed-methods approach, the study examines the associations between social media behavior, traditional lifestyle factors, and obesity risk while ensuring adherence to ethical guidelines and protecting individual privacy. The findings highlight the importance of integrating social media metrics into risk prediction models, offering new avenues for intervention and prevention efforts. This research provides a deeper understanding of the complex interplay between social media behavior, lifestyle factors, and obesity risk, emphasizing the need for multidisciplinary approaches to tackle this growing public health challenge.
Authors - Alisher Ikramov, Shakhnoza Mukhtarova, Raisa Trigulova, Dilnoza Alimova, Dilafruz Akhmedova Abstract - Hospital readmissions pose a significant burden on healthcare systems, especially for patients with type 2 diabetes mellitus (T2DM) and cardiovascular diseases. Early readmission risk prediction is crucial for improving patient outcomes and reducing costs. In this study, we develop a predictive model based on accessible clinical features to estimate the risk of future hospitalizations. Using data from 260 patients at the Republican Specialized Scientific and Practical Medical Center for Cardiology in Uzbekistan, we trained a Generalized Linear Model that achieved a ROC AUC of 0.898 on the test set.
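A Generalized Linear Model with a logit link for a binary readmission outcome is logistic regression; the modeling step above might be sketched as follows, with synthetic stand-in data (the 260-patient cohort itself is not public, so the resulting AUC is illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical features of the 260-patient cohort
X, y = make_classification(n_samples=260, n_features=8,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# ROC AUC on the held-out test set, as reported in the study
auc = roc_auc_score(y_te, glm.predict_proba(X_te)[:, 1])
```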
Authors - Lucas V. Santos, Vitor B. Souza Abstract - Fog computing emerges as an innovative solution for edge data processing, proving particularly important in the context of the Internet of Things (IoT) by delivering low latency and high bandwidth at the cost of requiring a stable connection. One application that has greatly benefited from this concept is the use of Unmanned Aerial Vehicles (UAVs), also known as drones, for applications requiring real-time communication between these devices and, potentially, a base station. This paper focuses on the use of UAVs, highlighting the connectivity challenges posed by the limitations of wireless communication technologies such as Wi-Fi. To address these challenges, we propose a model based on deep reinforcement learning (a Double Deep Q-Network, DDQN), which helps drones decide the best route between origin and destination, balancing the minimization of travel time and the maximization of connectivity throughout the journey. Using a simulated environment where drones are trained to avoid disconnection areas, we found that the proposed model significantly improves connection stability in areas with limited coverage, albeit with an unavoidable increase in route distance. Comparisons with traditional routing methods demonstrate the advantages of our model.
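The travel-time/connectivity trade-off that the agent balances is typically encoded in the reward; the paper's actual reward function is not given in the abstract, so the following is only a hypothetical shaping that captures the stated idea (penalize every movement step, penalize steps outside coverage more heavily):

```python
def step_reward(step_cost, connected, disconnect_penalty=5.0):
    """Hypothetical per-step reward for the UAV routing agent."""
    reward = -step_cost              # every move costs travel time
    if not connected:
        reward -= disconnect_penalty  # strongly discourage dead zones
    return reward
```

With such a reward, a DDQN agent learns detours around dead zones whenever the extra travel cost is smaller than the accumulated disconnection penalty, which matches the behavior reported above (better connectivity at the price of longer routes).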
Authors - Sacrificio Sithole Junior, Mohammad Gulam Lorgat Abstract - The increase in the number of university students has resulted in long queues and delays in services, both during orientation events and in resolving general queries. A service chatbot is an artificial intelligence tool designed to interact with users, answering frequently asked questions and assisting in solving problems in an automated and efficient manner. This study presents the development of a chatbot prototype for the Faculty of Engineering administrative office in Chimoio, at the Universidade Católica de Moçambique (UCM), aiming to optimise service delivery, reduce waiting times, and increase efficiency in resolving common issues. Using a mixed-method approach, the study involved direct observation and questionnaires administered to students to identify the main problems with traditional service. The chatbot's development was carried out in two phases: the first involved data collection and the identification of needs, while the second covered the implementation of the prototype. This chatbot can provide a viable and effective solution to the challenges faced, delivering faster and more efficient service, while freeing up human resources for more complex tasks.
Authors - Bella Holub, Viktor Kyrychenko, Dmytro Nikolaienko, Maryna Lendiel, Dmytro Shevchenko, Andrii Khomenko Abstract - The article discusses the informational and algorithmic support for an atmospheric air quality monitoring system. It describes the system's architecture and individual components, along with a logical data model and two approaches to calculating the air quality index. Research on the use of caching methods, pre-aggregation, and sorting is presented to improve the efficiency of processing large volumes of data (Big Data).
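The article's two index-calculation approaches are not spelled out in the abstract; one widely used scheme for turning a pollutant concentration into an air quality index is piecewise-linear interpolation between breakpoints, sketched below (the breakpoint table here is a made-up illustration, not the system's actual one):

```python
def aqi(conc, breakpoints):
    """Piecewise-linear AQI: rows are (c_lo, c_hi, i_lo, i_hi)."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= conc <= c_hi:
            # Linear interpolation within the matching breakpoint band
            return (i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo
    raise ValueError("concentration outside the breakpoint table")

# Illustrative two-band table for a PM2.5-like pollutant
PM25 = [(0.0, 12.0, 0, 50), (12.1, 35.4, 51, 100)]
```

In a Big Data setting like the one described, such per-reading calculations are exactly what benefits from the caching and pre-aggregation the article investigates, since band lookups repeat across millions of sensor readings.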
Authors - Nguyen Ngoc Tu, Phan Duy Hung, Vu Thu Diep Abstract - In today's Industry 4.0 era, information technology has penetrated every industry, making work easier and faster and helping businesses operate more effectively. The ultimate measure of a business's success is customer satisfaction and loyalty. This work aims to enhance customer care by automating the processing of customer feedback through the development of an automatic classification system using deep learning techniques, specifically the Long Short-Term Memory (LSTM) model. The system automatically classifies customer problems, thereby improving service quality and enhancing the company's image. The study used customer feedback data from our company's customer care system, comprising 41,886 comments from Vietnamese customers. The study proposes using the LSTM model to process the text data and addresses the problem of imbalanced data to improve the accuracy and efficiency of the classification system. Test results show that the highest accuracy is about 80%. The study also recommends improving data labeling and testing more advanced natural language processing techniques to achieve better performance in the future.
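One standard remedy for the class-imbalance problem mentioned above, whether or not it is the exact technique the authors used, is to weight the training loss inversely to class frequency. A sketch with a made-up class mix (the category names and proportions are invented for illustration):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical skewed label distribution over feedback categories
labels = np.array(["billing"] * 70 + ["network"] * 25 + ["other"] * 5)
classes = np.unique(labels)
weights = compute_class_weight("balanced", classes=classes, y=labels)
# Rare classes get proportionally larger weights in the loss
weight_map = dict(zip(classes, weights))
```

The resulting mapping can be passed to the LSTM's training loss (for example via a `class_weight` argument in Keras-style APIs), so that mistakes on rare feedback categories cost more than mistakes on common ones.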
Authors - Pham Hong Duong, Phan Duy Hung, Vu Thu Diep Abstract - Text classification is a very popular problem with various applications in natural language processing (NLP). One of its core tasks is assigning labels or tags to units of text data such as sentences, paragraphs, and documents by exploring the relations between words or even characters. Many applications derive from text classification, namely sentiment analysis, topic classification, spam detection, document classification, and so on. The main object of analysis is text data, which can come from various sources such as newspapers, documents, or the text messages people exchange daily. Naturally, as one of the most important forms of communication, text is an extremely rich source of data. However, due to its unstructured nature and strong dependence on context, extracting insights from text can be very challenging and time-consuming. This study focuses on exploring the data and building a classification model on some gaming-application test sets. We approach the problem using basic text analysis methods and perform text classification by applying a deep learning method, the Convolutional Neural Network (CNN) model. The dataset is collected from test sets handwritten by Quality Assurance Engineers for various in-game content. The main label to be classified is the priority of the test cases in a test set (four priority levels, from highest to lowest); the priority is then used to decide which test cases fall into the regression test set. Finally, we analyze the performance of the deep learning model based on the evaluation metrics, compare it with a self-built traditional machine learning model using Logistic Regression, and test against real test-case input.
From this, we aim to improve the deep learning model and discuss possible future directions.
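The traditional baseline the study compares against could look like the following minimal sketch; the test-case texts and priority labels here are invented for illustration, and TF-IDF features are an assumption (the paper does not state its baseline's feature representation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up test-case descriptions with priority labels (P1 highest .. P4 lowest)
texts = ["verify login flow", "check shop purchase",
         "cosmetic text overlap", "verify save data"]
priority = ["P1", "P1", "P4", "P2"]

baseline = make_pipeline(TfidfVectorizer(),
                         LogisticRegression(max_iter=1000))
baseline.fit(texts, priority)
pred = baseline.predict(["verify login purchase"])[0]
```

High-priority predictions would then route a test case into the regression test set, mirroring the selection step described above.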
Authors - Makhabbat Bakyt, Khuralay Moldamurat, Luigi La Spada, Sabyrzhan Atanov, Zhanserik Kadirbek, Farabi Yermekov Abstract - This paper presents a geographic information system for monitoring and forecasting the spread of forest fires based on intelligent processing of aerospace data from low-orbit vehicles (LOA). The system uses convolutional neural networks (CNN) for fire detection and recurrent neural networks (RNN) for fire spread forecasting. To ensure the security of high-speed data transmission from LOA, a quantum key distribution (QKD) system is implemented, providing virtually unbreakable encryption. Experimental results demonstrate a 30% improvement in fire detection efficiency compared to traditional methods. The paper also discusses the potential costs of implementing QKD and AI, as well as the steps required for practical implementation of QKD on a large scale, taking into account factors such as the influence of the atmosphere on quantum key distribution.
Authors - Hiep L. Thi Abstract - This paper investigates robust control strategies for managing unmanned aerial vehicles (UAVs) and other systems in emergency situations. We explore the challenges associated with maintaining stability and performance under unforeseen and critical conditions, present current approaches to robust control, and propose new methodologies to enhance system resilience. The paper also discusses practical applications and future research directions in this vital area of control systems engineering.
Authors - Fisiwe Hlophe, Sara Saartjie Grobbelaar Abstract - By adhering to a systematic design approach informed by scientific and engineering principles, Advanced Frugal Innovations (AFIs) yield products that optimize resource utilization, enhancing environmental sustainability and achieving significant cost savings. Following the Joanna Briggs Institute (JBI) framework, this article presents a scoping review that explores the landscape of AFIs in agriculture in developing countries. The Bibliometrix software package was used to facilitate the analysis of the bibliometric data included in this study. The study found that AFIs rest on advanced engineering techniques facilitated by research and development and rigorous design, which makes them suitable for mass production and gives them a wide range of novelty. Their significant cost savings allow AFIs to compete in all markets, not exclusively lower-income ones. The study also found that factors such as a suitable innovation ecosystem, user-centered design, availability of highly skilled labor, and technology development enable the generation and development of AFIs, whereas skills shortages, lack of cohesion, funding issues, regulatory issues, and market access are among the hindrances. We propose a research agenda for better understanding the requirements for setting up innovation ecosystems in the agricultural context that will drive the development and wide adoption of AFIs.
Authors - Karen McCall, Bevi Chagnon Abstract - The advent of the Internet and digital content has underscored the need to ensure equal access to data tables for individuals with disabilities, particularly those who are blind. However, the conventional 'one size fits all’ solutions, akin to Alt Text, have proven inadequate in the face of the growing volume of complex digital data tables. This paper presents research findings that could significantly enhance the accessibility of complex data tables for all users. Past and current research typically focuses on two areas of digital table access: HTML and PDF, and simple rather than complex data tables [1] [2] [3] [4]. For those using screen readers, basic information about a data table is provided with two options. It is either a “uniform” or simple data table or a “non-uniform” complex data table, which can have potential accessibility barriers such as merged or split cells. This paper provides insight and the results of original research in the form of an end-user survey on the multifaceted accessibility barriers of complex data tables. The research highlights that any solution designed for those using screen readers can benefit everyone — regardless of their abilities — in understanding complex data tables. This inclusive approach not only underscores the value of our work to all users, making them feel included and valued, but also holds the promise of a more accessible digital future across all digital technologies and media formats and for all users.
Authors - Luz Norma Caisal, Mocha-Bonilla Julio A. Abstract - The digital era we live in, together with Learning and Knowledge Technologies (LKT), has radically transformed the forms and methods of teaching and learning; LKT are tools that have advanced digital teaching towards the creation of personalized and meaningful learning experiences. One application context is the teaching of Physical Education, an area that offers a wide variety of strategies in the teaching-learning process and into which various technological tools can be incorporated for both teaching and practice. We worked with a group of 84 third-year students of the Unified General Baccalaureate, aged ±16 years. The instrument was a structured questionnaire with polytomous questions distributed in three sections, and data processing and analysis were carried out using the IBM SPSS Statistics version 24 package. The results for the first section show that students feel satisfied or very satisfied when practicing physical education. In the second section, 89% of the students, the vast majority, used, applied and improved their learning thanks to learning and knowledge technologies in the physical education teaching process. Finally, the third section highlights the most-used technological tools: Genially, Google Meet, Kahoot, the Moodle platform, Prezi and Socrative. It is concluded that in physical education the application of Kinovea is essential for improving human movement.
Authors - Robert, Tubagus Maulana Kusuma, Hustinawati, Sariffudin Madenda Abstract - The process of forming a good dataset is a decisive step in the success of a facial expression recognition/classification system. This paper proposes 24 scenarios for the formation of facial expression datasets involving the Viola-Jones face detection algorithm, YCbCr and HSV color space conversion, and the Local Binary Pattern (LBP) and Local Monotonic Pattern (LMP) feature extraction algorithms. The results of the 24 dataset scenarios were then formed into five dataset categories to be used as training and testing datasets for two machine learning classification models, namely Support Vector Machine (SVM) and Convolutional Neural Network (CNN). The SVM classification model is designed using four different kernels: radial, linear, sigmoid, and polynomial basis functions. Meanwhile, the CNN classification model uses the MobileNetV2 architecture. From testing the five categories, the best accuracy result is 83.04%, provided by the SVM classifier using the sigmoid kernel and a combined dataset of LBP and LMP features extracted only from the facial area found by the Viola-Jones face detection algorithm. In addition, for the CNN classifier, the best accuracy was 82.14%, obtained using the Y-grayscale dataset, which also focuses only on the facial area but without the feature extraction process. The best-accuracy results for the two classifiers show that the face detection stage plays an important role in the facial expression recognition/classification system. The LBP and LMP algorithms are good enough to use for feature extraction when forming datasets for the SVM classification model.
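The Local Binary Pattern descriptor used in the pipeline above can be sketched in a few lines of NumPy; this is the basic 8-neighbour variant only, and the paper may use a different radius or LBP variant:

```python
import numpy as np

def lbp(image):
    """Basic 8-neighbour Local Binary Pattern over interior pixels."""
    c = image[1:-1, 1:-1]                       # centre (interior) pixels
    # Neighbours clockwise from top-left, each contributing one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = image.shape
    for bit, (dy, dx) in enumerate(shifts):
        n = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit  # set bit where neighbour >= centre
    return code
```

A histogram of these per-pixel codes over the detected face region is the kind of texture feature vector that would then be fed to the SVM.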
Authors - Toufik Mechouma, Ismail Biskri, Serge Robert Abstract - This paper introduces Syntax-Constraint-Aware BERT, a novel variant of BERT designed to inject syntactic knowledge into the attention mechanism using augmented Lagrange multipliers. The model employs syntactic dependencies as a form of ground truth to supervise the learning of word representations, thereby ensuring that syntactic structure exerts an influence on the model's word representations. The application of augmented Lagrangian optimisation enables the imposition of constraints on the attention mechanism, facilitating the learning of syntactic relationships. This approach augments the standard BERT architecture by modifying the prediction layer: the objective is to predict an adjacency matrix that encodes the words' syntactic relationships in place of the masked tokens. Our experiments demonstrate that injecting syntactic knowledge improves performance over BERT in terms of training time and on AG News text classification as a downstream task. By combining the flexibility of deep learning with structured linguistic knowledge, we merge bottom-up and top-down approaches. Furthermore, Syntax-Constraint-Aware BERT enhances the interpretability and performance of Transformer-based models.
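The constrained training described above rests on the augmented Lagrangian method. As a reminder of how that scheme works, here is a toy 1-D example (minimize f(x) = x² subject to x = 1, nothing to do with BERT itself) showing the alternating primal/dual updates:

```python
def inner_minimize(f_grad, g, g_grad, x, lam, rho, steps=200, lr=0.01):
    # Gradient descent on the augmented Lagrangian
    #   f(x) + lam * g(x) + (rho / 2) * g(x)**2
    for _ in range(steps):
        x -= lr * (f_grad(x) + (lam + rho * g(x)) * g_grad(x))
    return x

f_grad = lambda v: 2.0 * v   # f(x) = x^2
g = lambda v: v - 1.0        # constraint g(x) = 0  <=>  x = 1
g_grad = lambda v: 1.0

x, lam, rho = 0.0, 0.0, 10.0
for _ in range(20):          # outer loop: minimize, then update the multiplier
    x = inner_minimize(f_grad, g, g_grad, x, lam, rho)
    lam += rho * g(x)        # dual ascent on the Lagrange multiplier
# x approaches the constrained optimum x = 1 (and lam approaches -2)
```

In the paper's setting, f would be the language-modeling loss, g a constraint tying attention scores to the syntactic adjacency matrix, and the multiplier update would run alongside gradient training.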
Authors - Anda Batraga, Tatjana Volkova, Jelena Salkovska, Liene Kaibe, Didzis Rutitis, Eduards Aksjonenko, Marta Kontina Abstract - As AI develops, it is becoming increasingly important in digital marketing processes. AI has become an essential part of the digital marketing world, enabling businesses to reach their customers faster and to improve their business operations by automating some of the simplest tasks. Through technology transfer, AI brings significant improvements offering new opportunities and creative approaches to achieving the goals of a digital marketing strategy. The aim of this study is to investigate and analyse the possibilities of using AI in digital marketing strategy in order to draw conclusions and make proposals on the possibilities of improving digital marketing strategy in Latvian companies using AI. The results show that the transfer of AI technology can provide companies with several advantages. The need for a well-thought-out technology transfer is emphasised by the experts in order to make the technology work and help the company achieve its goals.
Authors - Lerato Mashiloane, Khutso Lebea Abstract - Since the beginning of the internet, there has been a continuous effort to secure and encrypt data transmitted through web browsers. The acronym VPN stands for "Virtual Private Network", which refers to the capability of creating a secure network connection while using public networks. Commercial and defence organisations have also adopted virtual private networks because they offer secure connectivity at reduced costs. This paper discusses what VPNs are, their importance, and the mechanics behind them, to help users understand how to achieve the highest level of security. The paper further examines factors to consider when choosing a VPN and the balance between security and performance.