LAU Yat Tin Andy and LAU Kin Wing won Second Runner-up in the 9th Final Year Project (FYP) Competition of the IEEE (Hong Kong) Computational Intelligence Chapter
Congratulations to undergraduate students LAU Yat Tin Andy and LAU Kin Wing, supervised by Prof. Jia Jiaya Leo, who won Second Runner-up in the 9th Final Year Project (FYP) Competition of the IEEE (Hong Kong) Computational Intelligence Chapter 2012.
Project name: Controlling PC by XBOX *Kinect*
Kinect is a motion-sensing input device produced by Microsoft, originally for the Xbox 360 video game console. It enables users to interact with the Xbox 360 without a game controller, through a natural user interface driven by body movements and spoken commands. Following the success of this technology, Microsoft released a new product with great potential for both developers and users: Kinect for Windows.
This project explores the possibility of using Kinect to interact with a PC, because we believe user experience should be the focus of the next generation of computing. In the first semester, we implemented a static pose recognition function. With the knowledge and experience gained from the first semester, we were ready to take the next step, action recognition, and finally we integrated our functionality into our final product, Kinect Presenter, to demonstrate the effort we have made.
We use the official Kinect SDK Beta 2, released by Microsoft, to implement Kinect Presenter, an application that lets the user drive a Microsoft PowerPoint presentation through a few intuitive actions. It enables the user to present without touching the mouse or keyboard, breaking away from the traditional way of controlling a PC and PowerPoint.
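An action such as "swipe right to advance the slide" can be sketched as a check on how far the hand has travelled over a short window of frames; on Windows the resulting command would then be delivered to PowerPoint as a keystroke (e.g. via the Win32 `SendInput` API). The detector below is a simplified illustration with assumed thresholds, not the project's actual recognizer:

```cpp
#include <deque>
#include <cstddef>

enum class SlideCommand { None, Next, Previous };

// Detects a horizontal swipe from a sliding window of hand x-positions
// (metres, in the sensor's coordinate frame). Window size and travel
// threshold are assumptions for illustration.
class SwipeDetector {
public:
    SlideCommand update(float handX) {
        window_.push_back(handX);
        if (window_.size() > kWindowSize) window_.pop_front();
        if (window_.size() < kWindowSize) return SlideCommand::None;
        float dx = window_.back() - window_.front();
        if (dx > kThreshold)  { window_.clear(); return SlideCommand::Next; }
        if (dx < -kThreshold) { window_.clear(); return SlideCommand::Previous; }
        return SlideCommand::None;
    }
private:
    static constexpr std::size_t kWindowSize = 10; // ~1/3 s at 30 fps
    static constexpr float kThreshold = 0.30f;     // minimum travel (metres)
    std::deque<float> window_;
};
```

Clearing the window after a detection is one simple way to avoid firing the same command on every subsequent frame of the gesture.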
Using C++, we built our application on top of one of the sample programs from the Kinect SDK Beta 2, then merged in the pose recognition function we implemented last semester, as well as an open-source library named LIBSVM for the machine training and action recognition parts. No raw data such as depth images is accessed; only the higher-level skeletal data and its related functions are used in development.
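Feeding skeletal data to LIBSVM means flattening the tracked joints into the library's sparse (index, value) format, typically after normalizing each joint relative to a reference joint so the features do not depend on where the user stands. The encoder below is a hypothetical sketch; `LibSvmNode` mirrors LIBSVM's `svm_node` struct so the example stays self-contained:

```cpp
#include <vector>

struct Joint { float x, y, z; };

// Mirrors LIBSVM's svm_node: 1-based feature index, vector terminated
// by a sentinel node with index -1.
struct LibSvmNode { int index; double value; };

// Encode a skeleton as a LIBSVM feature vector: each joint's position
// relative to a reference joint (e.g. the hip centre), flattened into
// (x, y, z) triples. The normalization choice is an assumption.
std::vector<LibSvmNode> encodeSkeleton(const std::vector<Joint>& joints,
                                       const Joint& reference) {
    std::vector<LibSvmNode> nodes;
    int idx = 1;
    for (const Joint& j : joints) {
        nodes.push_back({idx++, j.x - reference.x});
        nodes.push_back({idx++, j.y - reference.y});
        nodes.push_back({idx++, j.z - reference.z});
    }
    nodes.push_back({-1, 0.0}); // LIBSVM end-of-vector sentinel
    return nodes;
}
```

Vectors in this form can be collected into an `svm_problem` for `svm_train`, and later passed frame by frame to `svm_predict` for recognition.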
The application was trained and tested by a group of people of different heights and builds, wearing different clothes, which makes it more reliable and user-friendly.