Assistant Lecturer

Lamis Ali Hussein

Research Interests

Artificial Intelligence (AI)

Deep Learning

Computer Vision

Robotics

Gender Female
Place of Work Technical Management Institute Nineveh
Department Computer Systems Techniques
Position Responsible for the Quality Assurance and University Performance Division
Qualification Master
Speciality Computer Engineering
Email lames.ali@ntu.edu.iq
Phone 07722213808
Address Mosul - Al-Siddiq, Nineveh, Mosul, Iraq

Skills

Python Programming (95%)
Machine Learning (90%)
C++ Programming (95%)
Working Experience

Academic Qualification

BSc
Jan 7, 2008 - May 4, 2025

Computer Engineering
College of Engineering
Mosul University

Master
May 5, 2025 - Present

Northern Technical University
Technical College of Engineering / Mosul
Computer Engineering Techniques

Publications

ArSLR-ML: A Python-based machine learning application for Arabic sign language recognition
Apr 1, 2025

Journal Software Impacts

Publisher Elsevier B.V.

DOI https://doi.org/10.1109/ICETI63946.2024.10777193

Issue 1

Volume 24

ArSLR-ML is a real-time interactive application that uses a multi-class Support Vector Machine (SVM) for classification and MediaPipe for feature extraction to recognize static Arabic sign language gestures, focusing on numbers and letters and translating them into text and Arabic audio output. ArSLR-ML was built in the PyCharm IDE using Python with a graphical user interface (GUI), allowing for effective recognition of gestures. The application uses the laptop camera and the GUI to capture hand gestures, build a dataset for the machine learning models, and run them in real time.
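The classification stage described above can be sketched as follows: a multi-class SVM trained on hand-landmark feature vectors. MediaPipe's hand model yields 21 landmarks with (x, y, z) coordinates per hand, i.e. a 63-value feature vector; this is only a minimal illustration, with random clusters standing in for real landmark data rather than the application's actual dataset or parameters.

```python
# Minimal sketch of SVM-based gesture classification on landmark features.
# Assumption: each gesture sample is a 63-value vector (21 landmarks x 3
# coordinates), as produced by MediaPipe's hand model; synthetic clusters
# stand in for real gesture data here.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, per_class, n_features = 4, 30, 63

# Synthetic dataset: one cluster of feature vectors per gesture class.
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # multi-class SVM
clf.fit(X, y)

# A new landmark vector near class 2's cluster is classified accordingly.
sample = rng.normal(loc=2, scale=0.1, size=(1, n_features))
print(clf.predict(sample)[0])
```

In the real application the predicted class index would then be mapped to its Arabic letter or number and passed to the text and audio output stage.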


Static Arabic Sign Language Recognition in Real Time Using Machine Learning and MediaPipe
Dec 11, 2024

Journal 2024 1st International Conference on Emerging Technologies for Dependable Internet of Things (ICETI)

Publisher IEEE

DOI 10.1109/ICETI63946.2024.10777193

Sign language is a form of visual communication used by individuals who are deaf or hard of hearing. This visual language relies on gestures, handshapes, facial expressions, and body movements rather than spoken words to convey meaning. Therefore, to improve the lives of deaf or hard-of-hearing people in the Arab community, a more comfortable approach to learning and working must be developed. This paper presents an interactive computer vision-based system for recognizing static hand gestures (letters and numbers) in Arabic sign language in real time. The MediaPipe framework extracts features (hand landmarks) from each image, and a support vector machine recognizes the static gesture presented in front of the camera and translates it into its equivalent text and voice, an approach developed to bridge the communication gap between deaf and hearing people. Experiments were conducted on HopeArSL, a large dataset of 12,000 images covering 40 Arabic sign language gestures. The experiments showed 100% and 99.94% training accuracy for numbers and alphabet letters, respectively. The real-time accuracy and average response time for numbers and alphabet letters were (98.08%, 1.43 ms) and (99.09%, 7.48 ms), respectively.
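The feature-extraction step described above can be sketched as flattening MediaPipe-style hand landmarks into the vector the classifier consumes. MediaPipe landmark objects expose `x`, `y`, and `z` attributes; a namedtuple stands in for them here, so this is an illustrative sketch rather than the paper's exact preprocessing code.

```python
# Sketch of the feature-extraction step: flatten 21 hand landmarks
# (each with x, y, z attributes, as MediaPipe provides) into a single
# 63-value feature vector. A namedtuple stands in for MediaPipe's
# landmark objects; in the real system these would come from
# results.multi_hand_landmarks after processing a camera frame.
from collections import namedtuple

Landmark = namedtuple("Landmark", ["x", "y", "z"])

def landmarks_to_features(landmarks):
    """Flatten (x, y, z) landmarks into one feature vector."""
    feats = []
    for lm in landmarks:
        feats.extend((lm.x, lm.y, lm.z))
    return feats

# Dummy hand with 21 landmarks, as produced by MediaPipe's hand model.
hand = [Landmark(i * 0.01, i * 0.02, 0.0) for i in range(21)]
vec = landmarks_to_features(hand)
print(len(vec))  # 63 features per hand
```

One vector per frame keeps the classifier's input fixed-size, which is what makes the low per-gesture response times reported above feasible.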


Advancements in Robotic Systems for Sign Language Representation: A Review
Oct 28, 2023

Journal European Journal of Interdisciplinary Research and Development

DOI 10.1109/ICETI63946.2024.10777193

Issue 2

Volume 20

When compared to spoken language, sign language offers a rich linguistic medium that is essential for the deaf and hard-of-hearing community. To close the communication gap, there is a great need for robots and robotic hands capable of accurately understanding and replicating sign language gestures. This review paper integrates academic research and technological advancements. It presents a variety of methods and tools for developing robotic systems that interpret and produce sign language, and it covers the difficulties and potential applications of these systems' technical advancement. Robotic systems can represent sign language in a variety of ways, including humanoid robots, signing avatars, robotic arms, telepresence robots, gesture-controlled robots, and custom-built systems. Each has advantages and disadvantages, and the choice depends on the specific use case and requirements. The paper also discusses how human-robot interaction improves sign language representation and how robotic systems can help deaf and hard-of-hearing people learn, showing their effect on education and daily life. These systems can be used in research, education, and interactive settings. This comprehensive review of robotic systems for sign language representation will help researchers and practitioners advance inclusive communication technologies.


Conferences

Static Arabic Sign Language Recognition in Real Time Using Machine Learning and MediaPipe
Nov 25, 2024 - Nov 26, 2024

Publisher IEEE

DOI 10.1109/ICETI63946.2024.10777193

Country Yemen

Location Sana'a
