The 10th International Congress on Information and Communication Technology (ICICT 2025), held concurrently with the ICT Excellence Awards, will take place in London, United Kingdom, February 18-21, 2025.
Virtual Room 5C
Wednesday, February 19
 

11:43am GMT

Opening Remarks
Wednesday February 19, 2025 11:43am - 11:45am GMT
Virtual Room C London, United Kingdom

11:45am GMT

Blockchain-enabled smart contract adoption in infrastructure PPP projects: understanding the driving forces within the TOE framework
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors -
Abstract -
Paper Presenters
Virtual Room C London, United Kingdom

11:45am GMT

Enhanced Aerial Scene Classification Through ConvNeXt Architectures and Channel Attention
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors - Leo Thomas Ramos, Angel D. Sappa
Abstract - This work explores the integration of a Channel Attention (CA) module into the ConvNeXt architecture to improve performance in scene classification tasks. Using the UC Merced dataset, experiments were conducted with two data splits: 50% and 20% for training. Models were trained for up to 20 epochs, limiting the training process to assess which models could extract the most relevant features efficiently under constrained conditions. The ConvNeXt architecture was modified by incorporating a Squeeze-and-Excitation block, aiming to enhance the importance of each feature channel. ConvNeXt models with CA showed strong results, achieving the highest performance in the experiments conducted. ConvNeXt large with CA reached 90% accuracy and 89.75% F1-score with 50% of the training data, while ConvNeXt base with CA achieved 77.14% accuracy and 75.23% F1-score when trained with only 20% of the data. These models consistently outperformed their standard counterparts, as well as other architectures such as ResNet and Swin Transformer, with improvements of up to 9.60% in accuracy. These results highlight the effectiveness of CA in boosting performance, particularly in scenarios with limited data.
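The Squeeze-and-Excitation (SE) channel-attention mechanism this abstract describes can be sketched in a few lines. The NumPy sketch below uses random, untrained weights purely to illustrate the squeeze/excite/scale flow; the function name, shapes, and reduction ratio are assumptions, and this is not the authors' ConvNeXt implementation.

```python
import numpy as np

def se_channel_attention(x, reduction=4, rng=None):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W).

    Illustrative only: the two dense layers use random weights here,
    whereas in a real network they are learned during training.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = x.shape[0]
    # Squeeze: global average pooling -> one descriptor per channel
    z = x.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP with a sigmoid gate
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = np.maximum(w1 @ z, 0.0)                  # ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # sigmoid, shape (C,)
    # Scale: reweight each channel of the input by its gate value
    return x * gate[:, None, None]

feat = np.ones((8, 4, 4))
out = se_channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

The gate values lie in (0, 1), so the block suppresses or preserves whole channels rather than individual pixels, which is what lets the network emphasize the most informative feature channels.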
Paper Presenters
Virtual Room C London, United Kingdom

11:45am GMT

Geo-spatial and Temporal Analysis of Hadith Narrators
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors - Hamada R. H. Al-Absi, Devi G. Kurup, Amina Daoud, Jens Schneider, Wajdi Zaghouani, Saeed Mohd H. M. Al Marri, Younss Ait Mou
Abstract - This study integrates traditional Science of Hadith literature, which documents the sayings, actions, and approvals of the Prophet Muhammad (PBUH), with modern digital tools to analyze the geographic and temporal data of Hadith narrators. Using the Kaggle Hadith Narrators dataset, we apply Kernel Density Estimation (KDE) to map the spatial distribution of narrators’ birthplaces, places of stay, and death locations across generations, revealing key geographical hubs of Hadith transmission, such as Medina, Baghdad, and Nishapur. By examining narrators’ timelines and locations, we illustrate movement patterns and meeting points over time, providing insights into the spread of Hadith across the Islamic world during early Islamic history. To our knowledge, this research is the first systematic attempt to analyze Hadith transmission using geo-spatial and temporal methods, offering a novel perspective on the geographic and intellectual dynamics of early Islamic scholarship.
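Kernel Density Estimation of the kind described here can be reproduced with a compact NumPy sketch; the coordinates below are hypothetical placeholders, not values from the Kaggle Hadith Narrators dataset, and the fixed bandwidth is an assumption.

```python
import numpy as np

def gaussian_kde_2d(points, grid_xy, bandwidth=1.0):
    """Evaluate a 2D Gaussian kernel density estimate.

    points:  (N, 2) array of (lon, lat)-style coordinates
    grid_xy: (M, 2) array of evaluation locations
    """
    d = grid_xy[:, None, :] - points[None, :, :]         # (M, N, 2)
    sq = (d ** 2).sum(axis=2) / (2.0 * bandwidth ** 2)   # (M, N)
    k = np.exp(-sq) / (2.0 * np.pi * bandwidth ** 2)
    return k.mean(axis=1)                                # (M,)

# Hypothetical narrator coordinates, for illustration only
narrators = np.array([[39.6, 24.5], [39.6, 24.4], [44.4, 33.3]])
grid = np.array([[39.6, 24.5], [44.4, 33.3], [0.0, 0.0]])
density = gaussian_kde_2d(narrators, grid)
print(density)  # highest where narrator locations cluster
```

Evaluating the estimate on a regular longitude/latitude grid and contouring the result is what produces the density "hubs" the abstract refers to.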
Paper Presenters
Virtual Room C London, United Kingdom

11:45am GMT

Joined Video Description from Multiple Sources
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors - Francisco Seipel-Soubrier, Jonathan Cyriax Brast, Eicke Godehardt, Jorg Schafer
Abstract - We propose an architecture of a proof-of-concept for automated video summarization and evaluate its performance, addressing the challenges posed by the increasing prevalence of video content. The research focuses on creating a multi-modal approach that integrates audio and visual analysis techniques to generate comprehensive video descriptions. Evaluation of the system across various video genres revealed that while video-based large language models show improvements over image-only models, they still struggle to capture nuanced visual narratives, resulting in generalized output for videos without a strong speech-based narrative. The multi-modal approach demonstrated the ability to generate useful short summaries for most video types but, for speech-heavy videos in particular, offered minimal advantages over speech-only processing. The generation of textual alternatives and descriptive transcripts showed promise. While results are currently stable mainly for speech-heavy videos, future investigation into refinement techniques and potential advancements in video-based large language models holds promise for improved performance.
Paper Presenters
Virtual Room C London, United Kingdom

11:45am GMT

Optimized Edge AI Framework with Image Processing for Speed Prediction in Semi-Automated Electric Vehicles
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors - A.G.H.R. Godage, H.R.O.E. Dayaratna
Abstract - This study explores the implementation of edge computing for semi-automated vehicle systems in urban environments, leveraging modern wireless technologies such as 5G for efficient data transmission and processing. The proposed framework integrates a vehicle-mounted camera, an edge server, and deep learning models to identify critical objects, such as pedestrians and traffic signals, and predict vehicle speeds for the subsequent 30 seconds. By offloading computationally intensive tasks to an edge server, the system reduces the vehicle’s processing load and energy consumption, while embedded offline models ensure operational continuity during network disruptions. The research focuses on optimizing image compression techniques to balance bandwidth usage, transmission speed, and prediction accuracy. Comprehensive experiments were conducted using the Zenseact Open Dataset, a dataset published in 2023 that has not yet been widely utilized in the domain of semi-automated vehicle systems, particularly for tasks such as predictive speed modeling. The study evaluates key metrics, including bandwidth requirements, round-trip time (RTT), and the accuracy of various machine learning and neural network models. The results demonstrate that selective image compression significantly reduces transmission times and overall RTT without compromising prediction quality, enabling faster and more reliable vehicle responses. This work contributes to the development of scalable, energy-efficient solutions for urban public transport systems. It highlights the potential of integrating edge AI frameworks to enhance driving safety and efficiency while addressing critical challenges such as data transmission constraints, model latency, and resource optimization. Future directions include extending the framework to incorporate multi-modality, broader datasets, and advanced communication protocols for improved scalability and robustness.
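The bandwidth/latency tradeoff this abstract describes reduces to simple arithmetic. All figures below (frame size, compression ratio, uplink rate, inference latency) are hypothetical assumptions chosen only to illustrate why compressing frames before offloading shrinks the round trip; they are not measurements from the paper.

```python
def transmission_ms(payload_bytes, uplink_mbps):
    """Milliseconds needed to push a payload over an uplink."""
    return payload_bytes * 8 / (uplink_mbps * 1e6) * 1e3

# Hypothetical figures, for illustration only
raw_frame = 2_000_000         # ~2 MB uncompressed camera frame
compressed = raw_frame // 20  # ~100 KB after selective compression
uplink_mbps = 50.0            # assumed 5G uplink rate
inference_ms = 30.0           # assumed edge-server model latency

# Round trip ~= upload + inference (the returned speed prediction is tiny)
rtt_raw = transmission_ms(raw_frame, uplink_mbps) + inference_ms
rtt_compressed = transmission_ms(compressed, uplink_mbps) + inference_ms
print(rtt_raw, rtt_compressed)  # compression cuts the round trip sharply
```

Under these assumptions the upload dominates the uncompressed round trip, which is why the paper's selective compression can reduce RTT substantially even though the edge-server inference time is unchanged.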
Paper Presenters
A.G.H.R. Godage
Virtual Room C London, United Kingdom

11:45am GMT

Semantic Landscape of Legal Lexicons: Unpacking Medical Decision Making Controversies
Wednesday February 19, 2025 11:45am - 1:15pm GMT
Authors - Haesol Kim, Eunjae Kim, Sou Hyun Jang, Eun Kyoung Shin
Abstract - This study empirically examined historical trajectories of the semantic landscape of legal conflicts over medical decision making. We unveiled the lexical structures of lawsuit verdicts, tracing how the core concepts of shared decision making (SDM), namely the duty of care, the duty to explain, and self-determination, have developed and been contextualized in legal discourses. We retrieved publicly available court verdicts using the search keyword ‘patient’ and screened them for relevance to doctor-patient communications. The final corpus comprised 251 South Korean verdicts issued between 1974 and 2023. We analyzed the verdicts using neural topic modeling and semantic network analysis. Our study showed that topic diversity has expanded over time, indicating increased complexity of semantic structures regarding medical decision-making conflicts. We also found two dominant topics: disputes over healthcare providers’ liability and disputes over compensation for medical malpractice. The results of semantic network analysis showed that the rhetoric of patients’ right to medical self-determination is not closely tied to the professional responsibility to explain and care. The decoupled semantic relationships between patients’ rights and health professionals’ duties revealed the barriers to SDM implementation.
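Semantic networks of the kind this abstract describes are commonly built from term co-occurrence counts. A minimal pure-Python sketch follows, using toy sentences that merely echo the study's key concepts, not its actual verdict corpus or method details.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_network(documents):
    """Count how often pairs of terms appear in the same document.

    Each edge (term_a, term_b) -> count is one weighted link in the
    resulting semantic network.
    """
    edges = Counter()
    for doc in documents:
        terms = sorted(set(doc.lower().split()))
        edges.update(combinations(terms, 2))
    return edges

# Toy corpus echoing the study's key concepts (illustrative only)
docs = [
    "duty to explain breached",
    "self-determination right of patient",
    "duty of care and duty to explain",
]
net = cooccurrence_network(docs)
print(net[("duty", "explain")])  # co-occurrence count for this pair
```

In a network built this way, two concepts that rarely share a document end up weakly connected, which is the structural sense in which the study finds patients' self-determination "decoupled" from professionals' duties.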
Paper Presenters
Virtual Room C London, United Kingdom

1:15pm GMT

Session Chair Remarks
Wednesday February 19, 2025 1:15pm - 1:17pm GMT
Virtual Room C London, United Kingdom

1:17pm GMT

Closing Remarks
Wednesday February 19, 2025 1:17pm - 1:20pm GMT
Virtual Room C London, United Kingdom
 
