The 10th International Congress on Information and Communication Technology (ICICT 2025), held concurrently with the ICT Excellence Awards, will take place in London, United Kingdom, February 18-21, 2025.
Type: Virtual Room_10B
Thursday, February 20
 

1:58pm GMT

Opening Remarks
Thursday February 20, 2025 1:58pm - 2:00pm GMT
Virtual Room B London, United Kingdom

2:00pm GMT

BRIDGING DIGITAL DIVIDE: A STUDY ON THE UMANG PLATFORM AND ITS IMPACT ON E-GOVERNANCE
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - Swastika Das
Abstract - This paper discusses the role of the UMANG platform in addressing the digital divide in Indian e-governance. Launched in 2017 under the Digital India initiative, UMANG aggregates nearly 2,000 services across sectors such as health, education, and finance onto a single mobile-first platform that it strives to make accessible, transparent, and efficient. Despite rapid digitalization in India, especially in cities, rural areas lag significantly in internet penetration, at only 37.3%. The present research examines how the platform attempts to reduce that gap by offering services in 22 Indian languages, an Assisted Mode for users with limited digital literacy, and real-time updates for tracking service requests. By channelling citizen engagement in the right direction, UMANG has streamlined interactions, minimised bureaucratic delays, and improved transparency. Challenges remain, however, including limited digital literacy, data security, and resistance from some government departments. The study concludes that with continued service integration, enhancements in digital security, and wider citizen participation, UMANG can transform governance in India, paving the way towards realising the vision of Digital India.
Paper Presenters
Virtual Room B London, United Kingdom

2:00pm GMT

FlexiMind: Dyslexia Assessment and Aid Application for Specific Learning Disorders
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - D. I. De Silva, S.V. Sangkavi, W. M. K. H. Wijesundara, L. G. A. T. D. Wijerathne, L. H. Jayawardhane
Abstract - This study introduces FlexiMind, an innovative mobile application designed to support children aged 6–10 with specific learning disorders, including dyslexia, dysgraphia, and dyscalculia. By integrating evidence-based instructional strategies and leveraging modern technologies, the application delivers an inclusive and interactive learning environment. The app comprises four core modules: Dyslexia Assessment, Tamil Letter Learning, Math Hands, and Word Recognition & Sentence Construction. These modules employ multisensory approaches, including real-time feedback, gesture-based learning, and machine learning algorithms, to enhance cognitive, linguistic, and mathematical skills. Preliminary findings highlight significant improvements in handwriting accuracy, letter recognition, phonemic awareness, and mathematical comprehension among children using FlexiMind. With its focus on Tamil language support and an adaptive design, FlexiMind addresses the unique needs of Tamil-speaking children while offering scalable solutions for broader educational contexts. This study underscores the potential of technology-driven tools in transforming learning experiences for children with specific learning disorders.
Paper Presenters

S.V. Sangkavi

Sri Lanka
Virtual Room B London, United Kingdom

2:00pm GMT

Investigating Behavioral Responses across Landslide Scenarios in Virtual Reality
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - Arjun Mehra, Arti Devi, Ananya Sharma, Sahil Rana, Shivam Kumar, K V Uday, Varun Dutt
Abstract - Virtual reality holds enormous potential for disaster preparedness; yet, little is known about how varying landslide risk levels and environmental conditions (day vs. night) affect people's physiological and psychological responses to such simulated catastrophes. This study addresses that gap by investigating stress and cognitive responses through behavioral measures: Euclidean distance around collisions, number of collisions, and velocity around collisions. Eighty volunteers were randomly divided into four groups, each exposed to a distinct landslide probability under varying conditions: low probability during the day, high probability during the day, and high probability at night. The findings indicate that perceived risk significantly increased the behavioral measures, independent of time of day. These results demonstrate VR's capacity to improve cognitive engagement and prepare participants for the psychological difficulties that arise in actual crisis scenarios.
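For illustration only, below is a minimal Python sketch (not the authors' implementation) of how the three behavioral measures named in the abstract could be computed from a logged participant trajectory; the data layout, the one-second window around each collision, and the function name are assumptions.

```python
# Hypothetical sketch: behavioral measures from a logged VR trajectory.
# Assumptions: positions sampled over time, collisions logged as timestamps,
# and an arbitrary +/- 1 s "around collision" window.
import numpy as np

def behavioral_measures(positions, timestamps, collision_times, window_s=1.0):
    """positions: (N, 3) array in metres; timestamps: (N,) seconds;
    collision_times: list of collision timestamps in seconds."""
    velocities = np.gradient(positions, timestamps, axis=0)  # per-sample velocity
    speed = np.linalg.norm(velocities, axis=1)

    dists, speeds = [], []
    for t_c in collision_times:
        mask = np.abs(timestamps - t_c) <= window_s          # samples near the collision
        if not mask.any():
            continue
        seg = positions[mask]
        # Euclidean path length travelled within the window around the collision
        dists.append(np.sum(np.linalg.norm(np.diff(seg, axis=0), axis=1)))
        speeds.append(speed[mask].mean())                     # mean speed near the collision

    return {
        "num_collisions": len(collision_times),
        "mean_distance_around_collision": float(np.mean(dists)) if dists else 0.0,
        "mean_velocity_around_collision": float(np.mean(speeds)) if speeds else 0.0,
    }
```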
Paper Presenters
Virtual Room B London, United Kingdom

2:00pm GMT

Multi-Level Hybrid Ensemble with Attention based meta-learner for Diabetic Retinopathy Prediction
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - Akriti Agarwal, Harshvardhan Singh Gahlaut, Annie Jain, Shalini L
Abstract - The most common complication of diabetes mellitus is diabetic retinopathy (DR): it causes lesions on the retina and impairs vision, and if not diagnosed early it can lead to blindness. Early diagnosis is therefore essential to avoid irreversible loss of vision. Moreover, manual diagnosis by ophthalmologists is less efficient than computer-aided systems and can miss small details that, in some cases, are not visible to the naked eye. This paper proposes a supervised learning strategy for detecting DR from retinal fundus images using a hybrid of deep learning models (InceptionV3 and ResNet) and machine learning classifiers (Random Forest and Support Vector Machine). The architecture fuses the neural networks with these classifiers, further tuned and combined through an attention mechanism, to deliver robust and highly accurate classification of DR and non-DR cases. The dataset comprises 30,000 fundus images, which are preprocessed and augmented to improve model performance and address class imbalance. Additionally, a front-end app with Grad-CAM analysis is developed to classify DR and non-DR images and visualize where the model focuses during classification.
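For illustration only, a simplified Python sketch of the ensemble pattern the abstract describes: deep CNN features feed Random Forest and SVM base learners, whose probability outputs are combined by an attention-style (softmax-weighted) meta-learner. Random features stand in for InceptionV3/ResNet embeddings of fundus images; the paper's actual data, tuning, and attention mechanism are not reproduced.

```python
# Hypothetical sketch (not the paper's code) of a hybrid ensemble with a
# softmax-weighted meta-learner over base-model probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))              # stand-in for InceptionV3/ResNet features
y = (X[:, :8].sum(axis=1) > 0).astype(int)    # stand-in for DR / non-DR labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Base learners named in the abstract: Random Forest and SVM
base_models = [RandomForestClassifier(n_estimators=200, random_state=0),
               SVC(probability=True, random_state=0)]
P_tr, P_te = [], []
for clf in base_models:
    clf.fit(X_tr, y_tr)
    P_tr.append(clf.predict_proba(X_tr)[:, 1])
    P_te.append(clf.predict_proba(X_te)[:, 1])
P_tr, P_te = np.column_stack(P_tr), np.column_stack(P_te)

# "Attention" simplified to a learned softmax weighting of base-model outputs.
meta = LogisticRegression().fit(P_tr, y_tr)
weights = np.exp(meta.coef_[0]) / np.exp(meta.coef_[0]).sum()
y_pred = (P_te @ weights > 0.5).astype(int)
print("attention weights:", weights, "accuracy:", (y_pred == y_te).mean())
```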
Paper Presenters
Virtual Room B London, United Kingdom

2:00pm GMT

Semantic Segmentation of Buildings using Optical Satellite Images and Deep Learning
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - Nadia Liz Quispe Siancas, Julian Llanto Verde, Wilder Nina Choquehuayta
Abstract - Semantic segmentation of buildings using optical satellite images and deep learning techniques is essential for urban planning and monitoring, especially in suburban areas. In this study, we focused on evaluating the performance of six deep learning models: DeepLabV3 MobileNetV3, DeepLabV3 ResNet50, FCN ResNet50, EfficientNet-B0, ResNet101, and UNET. The dataset was collected from the province of Mariscal Cáceres, specifically in the district of Juanjuí, located in the department of San Martín, situated in the northeast of Peru. Our analysis revealed varying levels of precision for each model: DeepLabV3 MobileNetV3 achieved 74.14%, DeepLabV3 ResNet50 reached 83.35%, FCN ResNet50 attained 83.56%, EfficientNet-B0 yielded 61.37%, ResNet101 obtained 63.60%, and UNET demonstrated 74.54%. These results provide insights into the effectiveness of different deep learning architectures for semantic segmentation tasks in suburban environments.
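For reference, a short Python example of the pixel-level precision metric reported per model in the abstract (precision = TP / (TP + FP) over building pixels); the toy masks below are invented, and the authors' evaluation pipeline is not shown.

```python
# Hypothetical sketch: pixel-level precision for a binary "building" mask
# (1 = building, 0 = background). Toy arrays stand in for real predictions.
import numpy as np

def pixel_precision(pred_mask, gt_mask):
    """Precision = TP / (TP + FP) over predicted building pixels."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# Toy 4x4 example: 6 predicted building pixels, 4 of them correct -> 66.67%
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])
gt   = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 0, 0]])
print(f"precision = {pixel_precision(pred, gt):.2%}")
```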
Paper Presenters
Virtual Room B London, United Kingdom

2:00pm GMT

UAV image motion deblurring methods in precision agriculture: A Bibliometric Analysis to A Short Survey
Thursday February 20, 2025 2:00pm - 3:30pm GMT
Authors - Ambroise D. K. Houedjissin, Arnaud Ahouandjinou, Manhougbe Probus A. F. KIKI, Francois Xavier Ametepe, Kokou M. Assogba
Abstract - Image motion deblurring is an important issue in computer vision applications, which face challenges such as motion blur caused by camera shake, fast motion, or irregular deformation of living agricultural subjects during image acquisition. Images acquired by UAV-embedded cameras are often blurred and error-prone in precision agriculture, so image deblurring for applications such as plant phenotyping, crop pest and disease detection, or animal behavior analysis is a significant challenge. The main purpose of this paper is to carry out both a bibliometric analysis assessing current research trends on UAV image motion deblurring and a brief survey of the main image motion deblurring techniques in agriculture. We queried the Scopus database and retrieved 2,138 articles, which were then analyzed using a bibliometric tool. According to the results, the two most impactful authors have 53 and 46 publications, respectively. Remote Sensing is the most impactful journal, with an h-index of 49 and 285 published articles, whereas China is the country with the most impactful production and the most cited document, indicating its considerable influence in this area of research. The short survey indicates that further research is needed to develop more robust and efficient motion deblurring techniques tailored to the specific challenges of UAV imagery in precision agriculture.
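As a small worked example of the h-index cited in the abstract (the largest h such that at least h publications have at least h citations each), the following Python snippet uses invented citation counts, not the journal's actual data.

```python
# Worked example of the h-index used in the bibliometric analysis.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:       # at least `rank` papers have >= `rank` citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers each have >= 4 citations
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```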
Paper Presenters
Virtual Room B London, United Kingdom

3:30pm GMT

Session Chair Remarks
Thursday February 20, 2025 3:30pm - 3:33pm GMT
Virtual Room B London, United Kingdom

3:33pm GMT

Closing Remarks
Thursday February 20, 2025 3:33pm - 3:35pm GMT
Virtual Room B London, United Kingdom
 
