Research Project Description


My Favorite Publication

See also my favorite publications for an overview of my major research.


If you are interested in any one of the following projects, feel free to drop by my office (HSB 1015) for a chat.


Genome-convergence visualization of our EA on GPU

Evolutionary Computing and Genetic Algorithms on GPU
Evolutionary algorithms (EAs) are weak search and optimization techniques inspired by natural evolution. Genetic algorithms (GAs) are one well-known class of EAs. Although EAs are effective in solving many practical problems in science, engineering, and business, they may run for a long time on huge problems, because an enormous number of fitness evaluations must be performed. Specialized parallel hardware can be built to speed up the process, but such hardware is relatively difficult to use, manage, and maintain.

With the advances in modern consumer-level GPUs, we can design parallel EAs that fit the SIMD architecture of the GPU. With low-cost GPUs installed in ordinary PCs, more people will be able to use our parallel algorithms to solve huge problems encountered in real-world applications. However, we have shown that naively generating random numbers on the current generation of GPUs is infeasible, because poor-quality random numbers lead to poor performance of EAs. In this project, we study in depth how to implement EAs on consumer-level GPUs.
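To illustrate the data-parallel structure that maps well to SIMD hardware, here is a minimal sketch (a toy example, not our actual GPU implementation) of a generational GA on the OneMax problem. The fitness of the whole population is computed in one uniform map over all individuals, which is exactly the step a GPU would execute in parallel:

```python
import random

# Toy generational GA maximizing the number of 1-bits (OneMax).
# Population sizes and operators are illustrative choices only.

POP, GENES, GENERATIONS = 32, 16, 50
rng = random.Random(42)

def evaluate(population):
    # On a GPU, this map would be one parallel pass over all individuals.
    return [sum(ind) for ind in population]

def select(population, fitness):
    # Binary tournament selection.
    a, b = rng.randrange(len(population)), rng.randrange(len(population))
    return population[a] if fitness[a] >= fitness[b] else population[b]

def reproduce(p1, p2):
    # One-point crossover followed by a single-bit mutation.
    cut = rng.randrange(1, GENES)
    child = p1[:cut] + p2[cut:]
    child[rng.randrange(GENES)] ^= 1
    return child

population = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    fitness = evaluate(population)
    population = [reproduce(select(population, fitness),
                            select(population, fitness))
                  for _ in range(POP)]

best = max(evaluate(population))
```

Note that selection and reproduction above draw on a random number generator at every step, which is why the quality of GPU-side random numbers matters so much for EA performance.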

We pioneered the application of GPUs in evolutionary computing. Our project started in 2003 and has already been recognized by the evolutionary computing research community. Our work has been accepted for publication in the prestigious journal IEEE Intelligent Systems and at the largest evolutionary computing conference, the IEEE Congress on Evolutionary Computation 2005 (CEC 2005). To promote the use of GPUs for evolutionary computing, we have made the source code of our implementation available here.

Related publication:

  • "Evolutionary Computing on Consumer Graphics Hardware",
    K. L. Fok, T. T. Wong and M. L. Wong,
    IEEE Intelligent Systems, Vol. 22, No. 2, March/April 2007, pp. 69-78.

    (submitted 2004; revised April 2005; accepted July 2005)

  • "Parallel Evolutionary Algorithms on Graphics Processing Unit",
    M. L. Wong, T. T. Wong and K. L. Fok,
    in Proceedings of IEEE Congress on Evolutionary Computation 2005 (CEC 2005), Vol. 3, Edinburgh, UK, September 2005, pp. 2286-2293.



Result from the FYP of Ping-Hay Lai and Yiu-Kei Cheung

GPU Shader Techniques  
Traditional graphics hardware accelerators are basically rendering black boxes: users pass polygons, textures, and the lighting configuration to the hardware, and the hardware returns an image. Due to this black-box design, the rendering process is subject to many restrictions.

Current commodity graphics processing units (GPUs) have built-in shader hardware, which allows users to develop shader programs to drive the rendering. This offers much flexibility to game programmers and graphics researchers. In this project, we exploit these hardware capabilities and develop shaders that generate interesting images in real time, images that cannot be synthesized in real time with a pure software implementation.

For example (see left image), one of our projects generates cartoon images from a 3D geometric model in real time. Cartoon rendering can also be done with software shading systems such as Pixar's RenderMan, but rendering then takes far too long for interactive use. Developing shaders on commodity graphics hardware requires a good knowledge of computer graphics, shader programming, parallel programming (because the hardware is a parallel machine), and some mathematics.
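To give a taste of the kind of per-pixel computation such a shader performs, here is a hypothetical Python sketch of the core of a cartoon shader (on the GPU this would run per pixel inside a fragment shader): the diffuse term n·l is quantized into a few flat bands instead of varying smoothly, which produces the flat, cel-like look.

```python
import math

# Toy cartoon (toon) shading: quantize the diffuse term n.l into flat bands.
# The band count is an illustrative choice; real shaders often use a lookup
# texture for the same purpose.

def toon_shade(normal, light_dir, bands=3):
    nlen = math.sqrt(sum(c * c for c in normal))
    llen = math.sqrt(sum(c * c for c in light_dir))
    ndotl = sum(a * b for a, b in zip(normal, light_dir)) / (nlen * llen)
    ndotl = max(0.0, ndotl)             # back-facing pixels get no light
    band = min(int(ndotl * bands), bands - 1)
    return band / (bands - 1)           # flat intensity of the chosen band
```

A surface facing the light lands in the brightest band (intensity 1.0), while surfaces at grazing angles or facing away drop to 0.0 in discrete steps rather than a smooth gradient.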

Related publication:


Data Compression for Image-based Modeling and Rendering
Image-based computer graphics trades memory and disk space for rendering speed. It is no surprise that the data sets of image-based computer graphics systems are tremendously large. Without compression, Internet applications of image-based computer graphics systems are impractical.

To overcome this problem, this project investigates how to exploit the data coherence in image-based data. Data in image-based systems are sometimes similar to those in video systems, so some video compression techniques can be reused. However, the fundamental difference between the two is that video systems involve restricted user interaction, such as forward/backward playback, while image-based systems involve complex user navigation and illumination control. Hence, standard video compression techniques cannot be naively applied to image-based systems; we need to design specialized codecs for them.
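The random-access requirement can be sketched as follows (a hypothetical toy scheme on a tiny 1-D "image", not our actual codec): each non-reference image is coded as a residual against one shared reference, so any single image can be decoded on its own, without replaying a chain of frames the way video P-frames require.

```python
# Toy residual coding against a single shared reference image.
# The pixel values below are illustrative only.

REFERENCE = [10, 12, 14, 16, 18, 20]
IMAGES = [
    [10, 12, 15, 16, 18, 21],   # nearby view 0
    [11, 12, 14, 17, 18, 20],   # nearby view 1
]

def encode(image, reference):
    # Coherent images leave small, mostly-zero residuals, which compress well.
    return [a - b for a, b in zip(image, reference)]

def decode(residual, reference):
    return [a + b for a, b in zip(residual, reference)]

coded = [encode(img, REFERENCE) for img in IMAGES]
```

Because every image depends only on the reference, a user navigating to an arbitrary view pays a constant decoding cost, which is what complex navigation and illumination control demand.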


Related publication:


Image-based Relighting for Computer Games
Computer graphics has been a hot research and commercial area. One of its major applications is computer games. However, even with state-of-the-art hardware and software technologies, rendering a forest scene in real time is still impractical. Image-based rendering is a new stream of computer graphics that "generates images using images". It has the nice property that the time complexity of the renderer is independent of the scene geometry. In other words, real-time rendering of a forest scene with millions of trees becomes possible.

One fundamental function of traditional graphics is lighting. It becomes problematic in image-based computer graphics, because the illumination is fixed at the time the images are captured.

We started research on image-based relighting in 1996, and it has received much attention in the community. The direct application of image-based relighting is computer games, where a complex background can be represented as a relightable panorama that is relit in real time. By mapping it onto 3D object surfaces, the technique can also be applied to appearance modeling.
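The core idea of relighting from images can be sketched as follows (a hypothetical toy example, not our published method): because light transport is additive, pre-captured images of the scene under a set of basis lights can be blended per pixel with per-light weights to synthesize the scene under any new illumination that is a linear combination of the basis lights.

```python
# Toy image-based relighting: a weighted per-pixel sum of basis images.
# The three-pixel "images" and the weights below are illustrative only.

BASIS_IMAGES = [
    [0.2, 0.4, 0.1],   # scene captured under basis light 0
    [0.1, 0.3, 0.5],   # scene captured under basis light 1
]

def relight(weights, basis_images):
    out = [0.0] * len(basis_images[0])
    for w, img in zip(weights, basis_images):
        for i, p in enumerate(img):
            out[i] += w * p
    return out

# New lighting condition: half of light 0 plus all of light 1.
relit = relight([0.5, 1.0], BASIS_IMAGES)
```

Since the blend is a fixed-cost sum over the basis images, the relighting cost is independent of the scene's geometric complexity, matching the appeal of image-based rendering for games.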

Related publication:


The Immersive Cockpit
Tele-immersion is a rapidly growing technology that allows geographically separated users to collaborate in a shared virtual environment. In this project, we develop techniques for the real-time construction of a live panoramic video from multiple live video streams obtained with ordinary CCD video cameras.

Using image mosaicking techniques, the video frames are combined to form an image with a large field of view. At run time, the live video streams are used as texture maps, which are rendered into a live panoramic video stream. The generated video is then projected onto an immersive display, the VisionStation.
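One common warp used when mosaicking views from cameras sharing a center of projection, sketched here with hypothetical names (our pipeline may differ in detail), maps each pinhole-camera pixel onto a cylindrical panorama: with focal length f, the centred image column x maps to the panning angle atan(x/f), and the row y maps to a height on the cylinder.

```python
import math

# Toy pinhole-to-cylinder warp for panoramic mosaicking.
# x, y are pixel coordinates relative to the image centre; f is the focal
# length in pixel units.

def to_cylinder(x, y, f):
    theta = math.atan2(x, f)            # horizontal angle on the cylinder
    v = y / math.sqrt(x * x + f * f)    # scanline height on the cylinder
    return theta, v

center = to_cylinder(0.0, 0.0, 500.0)   # image centre -> forward direction
```

Once every frame is expressed in the shared (theta, v) coordinates, overlapping frames can be aligned and blended into one wide field-of-view image.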

Related publication:


Modeling Natural Imperfections
Computer-generated images have been criticized as too perfect, too clean, and hence too artificial. In 1995, we developed an automatic method to generate dust and other blemishes on geometric models. The technique has been used in commercial packages such as "Dirty Reyes". However, artists still spend hours fine-tuning computer-generated images to make them look real (imperfect).

In this project, we will investigate automatic methods to generate different types of imperfections. The goal is to generate "realistic" (imperfect) images automatically.
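As one hypothetical ingredient of such a method (a sketch for illustration, not our published dust model), the amount of dust deposited on a surface can be driven by how much the surface faces upward, jittered by noise so the deposit looks uneven:

```python
import math
import random

# Toy dust-amount heuristic: horizontal, upward-facing surfaces catch the
# most dust; vertical and downward-facing surfaces catch little or none.
# The noise range and exposure factor are illustrative choices.

def dust_amount(normal, rng, exposure=1.0):
    up = (0.0, 0.0, 1.0)
    length = math.sqrt(sum(c * c for c in normal))
    cos_tilt = sum(a * b for a, b in zip(normal, up)) / length
    base = max(0.0, cos_tilt) * exposure      # upward-facing catches the most
    return base * (0.8 + 0.4 * rng.random())  # jitter for an uneven deposit

rng = random.Random(1)
ceiling = dust_amount((0.0, 0.0, -1.0), rng)  # downward-facing: no dust
table = dust_amount((0.0, 0.0, 1.0), rng)     # upward-facing: heavy dust
```

Evaluating such a function over a model's surface yields a dust map that can modulate the surface color or displacement at render time.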


Related publication:


Home
Copyright 1996-2012 Tien-Tsin Wong. All rights reserved.