EyeTrack

This project was completed as part of CSE 40535: Computer Vision at the University of Notre Dame. The goal was to implement a computer vision model that detects, tracks, and accurately determines where a subject is looking from a camera feed, translates that data into mouse movement, and incorporates gesture recognition for actions such as clicking. Group members were Santiago Rodriguez, Jose Benitez, and Gustavo Aniceto.


Responsibilities

  • Developed a CV model that detects and tracks a subject's gaze from a camera feed, translates that gaze estimate into mouse movement, and incorporates gesture recognition for clicking and other actions.
  • Designed an interface that lets users interact with the software intelligently, without disrupting the traditional use of a computer.
  • The software is split into two components: Gaze Tracking and Gesture Detection.
  • Gaze Tracking: We used OpenCV to estimate gaze vectors and determine where the user is looking on the screen, then used that estimate to move the mouse cursor to the desired location (a minimal sketch follows this list).
  • Gesture Detection: We used OpenCV to detect hand gestures and translate them into mouse clicks, drags, and other actions (see the second sketch below).
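To make the gaze-tracking step concrete, here is a minimal sketch of the idea. It is not the project's actual model: the Haar-cascade eye detector, the darkest-pixel pupil estimate, the linear pupil-to-screen mapping, and the pyautogui cursor call are all stand-in assumptions for the real gaze-vector model and calibration.

```python
# Minimal sketch: detect an eye, treat the darkest pixel as the pupil,
# and map its position inside the eye box to screen coordinates.
# Stands in for the project's real gaze-vector model and calibration.
import cv2
import pyautogui

pyautogui.FAILSAFE = False  # the crude mapping may hit screen corners
screen_w, screen_h = pyautogui.size()
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # default webcam (an assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(eyes) > 0:
        x, y, w, h = eyes[0]
        eye = cv2.GaussianBlur(gray[y:y + h, x:x + w], (7, 7), 0)
        # The pupil is usually the darkest region of the eye patch.
        _, _, pupil, _ = cv2.minMaxLoc(eye)
        # Linearly map the pupil's position in the eye box to the screen.
        pyautogui.moveTo(int(pupil[0] / w * screen_w),
                         int(pupil[1] / h * screen_h))
    cv2.imshow("gaze", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In practice a per-user calibration step (look at known screen points, fit a mapping) replaces the naive linear mapping above.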
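The gesture side can be sketched in a similar hedged way. The assumptions here are illustrative, not the repo's: the hand sits in a fixed region of interest, Otsu thresholding separates it from the background, and a closed fist, i.e. a contour that fills most of its convex hull, triggers a single click.

```python
# Sketch: segment the hand in a fixed region, measure how completely the
# contour fills its convex hull ("solidity"), and treat a closed fist
# (high solidity) as a click. The ROI and thresholds are assumptions.
import cv2
import pyautogui

cap = cv2.VideoCapture(0)
was_fist = False  # debounce so a held fist clicks only once
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[100:350, 100:350], cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(roi, (11, 11), 0)
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    is_fist = False
    if contours:
        hand = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(hand)
        hull_area = cv2.contourArea(cv2.convexHull(hand))
        # An open hand leaves gaps between the fingers, lowering solidity;
        # a fist fills nearly all of its hull.
        if area > 5000 and hull_area > 0 and area / hull_area > 0.9:
            is_fist = True
    if is_fist and not was_fist:
        pyautogui.click()
    was_fist = is_fist
    cv2.imshow("gesture", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The same solidity signal extended with finger counting would support drags and other gestures beyond a single click.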

Skills

Python, OpenCV, PyTorch

Repo


GitHub