Samuel Kakuba is an Assistant Lecturer in Computer and Communications Engineering at Kabale University, currently pursuing a Ph.D. in Electronics and Electrical Engineering (Information and Communication Engineering) at Kyungpook National University, South Korea. His research focuses on Artificial Intelligence (AI) and Intelligent Signal Processing, with a strong interest in communication systems and AI applications in electronics engineering.

He holds an M.Sc. in Data Communication & Software Engineering (Communications) from Makerere University and a B.Sc. in Computer Engineering from Busitema University. Samuel is passionate about advancing the Fourth Industrial Revolution through teaching and research in areas such as Internet of Things (IoT), AI systems using machine learning, soft computing, and deep learning.

Currently, he is a research scholar in intelligent speech and radar signal processing for human behavior prediction and affective computing. Since 2011, he has supervised and mentored student research across several universities. His academic vision is to bridge AI and practical engineering to deliver affordable, impactful solutions for local communities.

Samuel's publications are available on Google Scholar: https://scholar.google.com/citations?user=yaAUeFYAAAAJ&hl=en

Qualifications

  • Ph.D. Scholar in Electronics and Electrical Engineering (2021 to date), Kyungpook National University, South Korea
  • M.Sc. Data Communication and Software Engineering (2018), Makerere University, Uganda
  • B.Sc. Computer Engineering (2011), Busitema University, Uganda

Research Interests

Communication Systems Engineering, Internet of Things, Human Behavior Prediction, Artificial Intelligence Systems using Machine Learning and Deep Learning.

Current Research: Speech and Radar Signal Processing, Human Behavior Prediction and Affective Computing using Deep Learning (Computer Vision and NLP)

Presentations

Inception-inspired local feature learning with attention mechanism for speech emotion recognition, presented at the 32nd Joint Conference on Communication and Information (JCCI), April 27th to 29th, 2022.

Residual bidirectional LSTM with multi-head attention for speech emotion recognition, presented at the Korean Institute of Communications and Information Sciences (KICS) Summer Conference 2022, June 22nd to 24th, 2022.

Performance comparison of raw speech signals and handcrafted features in deep learning-based emotion recognition models, presented at the 3rd Korean Artificial Intelligence Conference, September 28th to 29th, 2022.

Speech emotion recognition using context-aware dilated convolution network, presented at the 27th Asia-Pacific International Conference on Communications (APCC), October 19th to 21st, 2022.

Bimodal speech emotion recognition using fused intra and cross modality features, presented at the 14th International Conference on Ubiquitous and Future Networks (ICUFN), July 4th to 7th, 2023.
