IWACI2008 Program

Vladimir Cherkassky

Vladimir Cherkassky
Department of Electrical and Computer Engineering
University of Minnesota
USA


Title: Introducing a Priori Knowledge Using Inference Through Contradiction

Abstract: This talk describes a new methodology for introducing a priori knowledge in the form of additional (unlabeled) data called Universum samples. This approach, introduced by Vapnik (2006), emphasizes the importance of a priori knowledge about admissible data, as opposed to knowledge about a good (or true) model used in traditional machine learning approaches. Incorporating such data-driven a priori knowledge into the learning process leads to a new type of inductive inference called Inference Through Contradiction. This new methodology allows improved generalization for difficult (high-dimensional and sparse) data. The talk will illustrate several methodological and practical issues associated with Inference Through Contradiction in the context of a real-life application: gender classification of human faces.

Biodata: Vladimir Cherkassky is a Professor of Electrical and Computer Engineering at the University of Minnesota, Twin Cities. He received his Ph.D. in Electrical Engineering from the University of Texas at Austin in 1985. His current research is on methods for predictive learning from data, and he co-authored the monograph Learning From Data, published by Wiley in 1998. Prof. Cherkassky has served on the Governing Board of INNS. He has served on the editorial boards of IEEE Transactions on Neural Networks, the Neural Networks journal, the Natural Computing journal and Neural Processing Letters, and on the program committees of major international conferences on artificial neural networks. He was Director of the NATO Advanced Study Institute (ASI) From Statistics to Neural Networks: Theory and Pattern Recognition Applications held in France in 1993. He has presented numerous invited talks and tutorials on neural network and statistical methods for learning from data. In 2007, he was elected Fellow of IEEE for ‘contribution and leadership in statistical learning and neural networks’.

Fred Ham

Fredric Ham
Department of Electrical and Computer Engineering
Florida Institute of Technology
USA


Title: Distributed Neurocomputing for Event Classification

Abstract: Classifier performance results are compared for a distributed bank of Partial Least-Squares Regression (PLSR) calibration models and the same classifier architecture using Radial Basis Function (RBF) neural networks. PLSR is a statistical modeling method from the class of techniques known as chemometrics, used in analytical chemistry to quantitatively analyze spectrophotometric data. Both the PLSR and RBF distributed classifier architectures are used to discriminate between different infrasound events. Infrasound is a longitudinal pressure wave with characteristics similar to audible acoustic waves but in a frequency range far below what the human ear can detect; the typical frequency range for infrasound is between 0.01 Hz and 10 Hz. An integral part of the Comprehensive Nuclear Test Ban Treaty (CTBT) International Monitoring System (IMS) is an infrasound network capable of detecting nuclear events. Other man-made events that can produce infrasound include the space shuttle, vehicles, rockets, artillery fire, and high-speed aircraft, to name a few. Moreover, nature is an incredible source of infrasound: volcano eruptions, earthquakes, tsunamis, bolides (meteors), microbaroms (infrasound radiated from ocean waves), surf, mountain ranges (mountain-associated waves), avalanches, and auroral waves. Because of the relatively low atmospheric absorption at low frequencies, infrasound waves can travel long distances in the earth’s atmosphere and can be detected with sensitive ground-based sensors. Each module (from the bank of modules within the distributed classifier architecture) is responsible for classifying one of several infrasound events, and thus is trained to identify only that particular event. The output threshold of each module is set according to a specific 3-D Receiver Operating Characteristic (ROC) curve, as opposed to using classic 2-D ROC curves.
Preprocessing of the infrasound signals is carried out by extracting an optimal set of cepstral coefficients and their associated derivatives, which form the feature vectors used to train and test the distributed classifier modules. This preprocessing method is similar to that used for speaker recognition. When a set of optimal feature vectors is used for training and testing, there is a significant increase in the Correct Classification Rate (CCR). For both classifier types, that is, using either the PLSR model or the RBF neural network within each module, greater than 95% CCR can be achieved.
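
The one-event-per-module architecture described above can be sketched in outline. The event names, score functions and thresholds below are hypothetical toy stand-ins: in the talk, each module is a trained PLSR model or RBF network scoring cepstral feature vectors, with its threshold derived from a 3-D ROC analysis.

```python
def classify(feature_vector, modules, thresholds):
    """Run every event-specific module; among modules whose score
    clears that module's own threshold, report the most confident."""
    detected = []
    for event, score_fn in modules.items():
        score = score_fn(feature_vector)
        if score >= thresholds[event]:
            detected.append((event, score))
    return max(detected, key=lambda t: t[1])[0] if detected else None

# Toy stand-ins for trained modules: each maps a feature vector to a
# match score in [0, 1] for the single event it was trained on.
modules = {
    "volcano": lambda x: x[0],
    "rocket": lambda x: x[1],
    "microbarom": lambda x: x[2],
}
thresholds = {"volcano": 0.5, "rocket": 0.5, "microbarom": 0.5}

print(classify([0.9, 0.2, 0.1], modules, thresholds))  # -> volcano
```

Returning None when no module fires reflects the design choice that a module bank trained on known events should be able to reject an unfamiliar signal rather than force a label.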

Biodata: Dr. Ham is Harris Professor of Electrical Engineering at Florida Institute of Technology in Melbourne, and Director of the Information Processing Laboratory. During the 2004-2005 academic year he was Interim Dean of the College of Engineering, and from 2000 to 2005 he was Associate Dean for Research in the College of Engineering. He worked 10 years in industry before coming to Florida Tech in 1988: with the Harris Corporation from 1980 to 1988 and the Shell Oil Company from 1976 to 1977. He is currently President of the International Neural Network Society (2007-2008), and previously served as its President-Elect (2006) and Secretary (2004-2005). He is a Senior Member of IEEE and was an Associate Editor of the IEEE Transactions on Neural Networks from 2001 to 2007. He has published over 100 technical papers and reports, mostly in the areas of neural networks, digital signal processing and biomedical engineering (specifically, biosensors). He holds 3 U.S. patents and is author of the textbook Principles of Neurocomputing for Science and Engineering (McGraw-Hill, 2001). Dr. Ham's current research interests include applications of neural networks, adaptive signal processing, biosensor development for non-invasive blood glucose monitoring, and development of neural-based classification methods using infrasound for monitoring nuclear explosions in support of the Comprehensive Nuclear Test Ban Treaty.

Nikola Kasabov

Nikola Kasabov
Knowledge Engineering and Discovery Research Institute
Auckland University of Technology
New Zealand


Title: Neural-, Genetic-, and Quantum Information Processing: Towards an Integrative Connectionist Theory and Systems

Abstract: The paper reviews the main principles of information processing at the neuronal, genetic, and quantum information levels. Each of these levels has already inspired the creation of efficient computational models, such as artificial neural networks for learning, evolutionary computation for optimization, gene and protein interaction networks, and quantum computation for fast parallel processing. The paper extends these paradigms to integrative connectionist models and systems, which combine in their structure and algorithms principles from different hierarchical levels of information processing in their dynamic interaction. Examples include: evolving spiking neural networks, applied to adaptive multimodal audio-visual information processing; integrative computational neurogenetic models, applied to modeling brain functions; quantum evolutionary algorithms for exponentially faster optimization; and quantum neural networks for building exponentially larger associative memories. The new models are significantly faster in feature selection and learning, and efficiently solve NP-complete biological and engineering problems of adaptive, incremental learning in a large-dimensional space. They can also help to better understand complex information processes in Nature and in the brain, especially how information processes at different levels interact, and to extend our understanding of the fundamental concept of Information, leading to a new Integrative Connectionist Information Theory. Open questions, challenges and directions for further research are presented.

Biodata: Professor Nikola Kasabov is the Founding Director and Chief Scientist of the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland (www.kedri.info/). He holds a Chair of Knowledge Engineering at the School of Computing and Mathematical Sciences at Auckland University of Technology. He is a Fellow of the Royal Society of New Zealand, a Fellow of the New Zealand Computer Society and a Senior Member of IEEE. He is President-Elect of the International Neural Network Society (INNS) and President of the Asia Pacific Neural Network Assembly (APNNA). He is a member of several technical committees of the IEEE Computational Intelligence Society and of IFIP AI TC12. Kasabov is on the editorial boards of several international journals, including the IEEE Transactions on Neural Networks, the IEEE Transactions on Fuzzy Systems, Information Sciences, and the Journal of Theoretical and Computational Nanoscience. He chairs the series of international conferences ANNES/NCEI in New Zealand. Kasabov holds M.Sc. and Ph.D. degrees from the Technical University of Sofia. His main research interests are in the areas of intelligent information systems, soft computing, neuro-computing, bioinformatics, brain study, speech and image processing, and novel methods for data mining and knowledge discovery. He has more than 400 publications, including 15 books, 120 journal papers, 60 book chapters, 32 patents and numerous conference papers. He has extensive experience at various academic and research organisations: University of Otago, New Zealand; University of Essex, UK; University of Trento, Italy; Technical University of Sofia, Bulgaria; University of California at Berkeley; RIKEN and KIT, Japan; Technical University of Kaiserslautern, Germany; and others. More information about Prof. Kasabov can be found on the KEDRI web site: http://www.kedri.info.

Robert Kozma

Robert Kozma
Department of Computer Science
University of Memphis
USA


Title: Neuropercolation Model of Neurodynamics and Cognitive Phase Transitions

Abstract: Higher cognition granulates the seemingly continuous temporal sequence of perceptual experiences into meaningful and comprehensible chunks of concepts and complex behavioral schemas, which are accessed during future action selection and decisions. These activities use spatio-temporal patterns emergent over dynamic landscapes, and provide an alternative to the notorious symbol grounding problem. Mathematically, these processes have been successfully described using differential equations (ODEs and PDEs) with distributed parameters.

In an alternative approach, we introduce the neuropercolation model based on random graph theory, linking pattern-based spatio-temporal encoding to generalized, non-local percolation and phase transitions in random graphs. Scale-free behavior and small-world effects in cortical tissues are important properties exhibited by neuropercolation. Unlike phase transitions in classical physical systems, neural phase transitions have a mesoscopic character, with intermittent long-range correlations. Noise is a driving source of the corresponding dynamics, producing random processes with apparent deterministic characteristics. They represent a new class of phenomena serving as a necessary substrate of high-level cognition. Mathematical and computational aspects of the model are elaborated in this talk, and future avenues toward neuropercolation-based dynamic memories are outlined as well.

Biodata: Robert Kozma has been Professor of Computer Science and Director of the Computational Neurodynamics Laboratory, University of Memphis, TN, since 2000. He holds degrees in Physics (Ph.D., Delft), Mathematics and Engineering (M.Sc., Budapest and Moscow). He has been on the faculty of the Department of Quantum Science & Engineering (Tohoku University, Japan) and Information Science (University of Otago, NZ), and has held visiting positions at UC Berkeley; Sarnoff Co., Princeton, NJ; AFRL/HAFB, MA; and others. He has over two decades of research experience in intelligent systems design and analysis, computational brain research, knowledge acquisition, and autonomous decision making in animals and animats. Research in his CND lab has been funded by NASA, NSF, JPL, AFRL, NRC, and other grants. He has published 4 books and ~200 articles in international journals, books, and peer-reviewed conference proceedings. His recent book “Neurodynamics of Cognition and Consciousness,” co-edited with L. Perlovsky, was published by Springer in Fall 2007. Dr. Kozma serves on the Board of Governors of the International Neural Network Society. He is Associate Editor of the IEEE Transactions on Neural Networks, Neurocomputing, and the Journal of Computational Neurodynamics, and Area Editor of New Mathematics and Natural Computation. He is Program Co-Chair of IJCNN08 at WCCI08 in Hong Kong, and General Chair of IJCNN09, June 2009, Atlanta.

Robert Reynolds

Robert G. Reynolds
Computer Science Department
Wayne State University
USA


Title: Designing Video Game Characters Using Cultural Algorithms

Abstract: We survey the different levels of intelligence that can be used to produce game characters. These levels include basic kinematic or movement algorithms, basic decision making and planning, and group tactics and strategy. We give examples of how Cultural Algorithms can be used to support character development at each level.
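
As a flavor of the lowest of these levels (basic kinematic movement), here is a minimal "seek" steering sketch; the function and its parameters are illustrative, not from the talk. In a Cultural Algorithms setting, parameters such as a character's maximum speed would be evolved across a population rather than hand-tuned.

```python
def seek(pos, target, max_speed):
    """Kinematic seek: head straight for the target at top speed."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0:
        return (0.0, 0.0)  # already at the target
    # Normalize the direction and scale it to the maximum speed.
    return (max_speed * dx / dist, max_speed * dy / dist)

print(seek((0.0, 0.0), (3.0, 4.0), 5.0))  # -> (3.0, 4.0)
```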

Biodata: Dr. Robert G. Reynolds received his Ph.D. degree in Computer Science, specializing in Artificial Intelligence, in 1979 from the University of Michigan, Ann Arbor. He is currently a professor of Computer Science and director of the Artificial Intelligence Laboratory at Wayne State University. He is an Adjunct Associate Research Scientist with the Museum of Anthropology at the University of Michigan-Ann Arbor, is affiliated with the Complex Systems Group there, and is a participant in the UM-WSU IGERT program on Incentive-Based Design. His interests are in the development of computational models of cultural evolution for use in the simulation of complex organizations and in computer gaming applications. Dr. Reynolds produced a framework, Cultural Algorithms, in which to express and computationally test various theories of social evolution using multi-agent simulation models. He has applied these techniques to problems concerning the origins of the state in the Valley of Oaxaca, Mexico, the emergence of prehistoric urban centers, the origins of language and culture, and the disappearance of the Ancient Anasazi in southwestern Colorado. He has co-authored three books: Flocks of the Wamani (1989, Academic Press), with Joyce Marcus and Kent V. Flannery; The Acquisition of Software Engineering Knowledge (2003, Academic Press), with George Cowan; and Excavations at San Jose Mogote 1: The Household Archaeology, with Kent Flannery and Joyce Marcus (2005, Museum of Anthropology-University of Michigan Press). He is also co-editor of four books on evolutionary computation. He has received funding from both government and industry to support his work, and has published over 230 papers on the evolution of social intelligence in journals, book chapters, and conference proceedings.
These include IEEE Computer, IEEE Computational Complexity, Scientific American, Complexity, IEEE Transactions on Evolutionary Computation, IEEE Transactions on Systems, Man, and Cybernetics, IEEE Software, Communications of the ACM, and the Proceedings of the National Academy of Sciences. Dr. Reynolds currently teaches courses on game programming and evolutionary computation, data mining, and agent-based modeling. In addition, he has given a number of tutorials on Cultural Algorithms, most recently at the IEEE 2007 Spring Symposium on Computational Intelligence. He is currently an associate editor for the IEEE Transactions on Evolutionary Computation, the International Journal of Artificial Intelligence Tools, the International Journal of Computational and Mathematical Organization Theory, the International Journal of Software Engineering and Knowledge Engineering, and the Journal of Semantic Computing. He is also a program co-chair for the 2008 IEEE World Congress on Computational Intelligence, program co-chair for the 2008 IEEE Swarm Intelligence Symposium, on the Advisory Board for the International Swarm Intelligence Symposium (2007), and current President of the Evolutionary Programming Society, among other activities.

Leszek Rutkowski

Leszek Rutkowski
Department of Computer Engineering
Technical University of Czestochowa
Poland


Title: On Designing and Reduction of Flexible Neuro-fuzzy Systems

Abstract: In the first part of the presentation a new class of neuro-fuzzy systems is proposed. Moreover, a novel method for reduction of such systems without deterioration of their accuracy is developed. In consecutive stages of the reduction algorithm we reduce the number of inputs, the number of rules, the number of antecedents, and the number of discretization points of the integrals in the center-of-area defuzzification method. Then we automatically detect and merge similar input and output fuzzy sets. Computer simulations show that the accuracy of the system after reduction and merging is not worse than before these operations, despite the fact that in some cases 50% of the various parameters and 40% of the inputs are eliminated. The reduction algorithm is tested using well-known classification benchmarks.

In the second part of the presentation an evolutionary strategy is applied to learning flexible neuro-fuzzy systems. In the process of evolution we determine: (i) the type of fuzzy inference (Mamdani type or logical type, described by an S-implication); (ii) the concrete fuzzy implication, if the logical-type system is found in the process of evolution, or the concrete t-norm connecting antecedents and consequents, if the Mamdani-type system is found; (iii) the concrete t-norm for aggregation of antecedents in each rule; (iv) the concrete triangular norm describing the aggregation operator; (v) shapes and parameters of the fuzzy membership functions; (vi) weights describing the importance of the antecedents of rules and weights describing the importance of rules; (vii) parameters of adjustable triangular norms; (viii) parameters of soft triangular norms. It should be noted that the crossover and mutation operators are chosen in a self-adaptive way. The method is tested using well-known classification benchmarks.
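
To make the choice in (i)-(ii) concrete, a toy sketch with illustrative membership degrees (the adjustable and soft triangular norms of the talk are not shown): a Mamdani-type rule aggregates its antecedents with a t-norm, while a logical-type rule applies an S-implication such as the Kleene-Dienes implication.

```python
def t_norm_min(a, b):
    """Minimum t-norm, one common antecedent aggregation."""
    return min(a, b)

def s_implication(a, b):
    """Kleene-Dienes S-implication: I(a, b) = max(1 - a, b)."""
    return max(1.0 - a, b)

# Rule "IF x is A AND y is B THEN z is C" with illustrative membership
# degrees mu_A(x) = 0.7 and mu_B(y) = 0.4:
firing = t_norm_min(0.7, 0.4)          # Mamdani-style firing strength
implied = s_implication(firing, 0.9)   # logical-type rule output
print(firing, implied)  # -> 0.4 0.9
```

Evolution then selects among such operator families (and their parameters) rather than fixing one in advance.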

Biodata: Leszek Rutkowski (M’92-SM’94-F’05) received the M.Sc. and Ph.D. degrees in 1977 and 1980, respectively, both from the Technical University of Wroclaw, Poland. Since 1980, he has been with the Technical University of Czestochowa, where he is currently a Professor and Chairman of the Computer Engineering Department. From 1987 to 1990 he held a visiting position in the School of Electrical and Computer Engineering at Oklahoma State University. His research interests include neural networks, fuzzy systems, computational intelligence, pattern classification and expert systems. In May and July 2004 he presented in the IEEE Transactions on Neural Networks a new class of probabilistic neural networks and generalized regression neural networks working in a time-varying environment. He has published over 140 technical papers, including 17 in various series of IEEE Transactions. He is the author of the books Computational Intelligence (Springer, 2008), New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing (Springer, 2004), Flexible Neuro-Fuzzy Systems (Kluwer Academic Publishers, 2004), Methods and Techniques of Artificial Intelligence (2005, in Polish), and Adaptive Filters and Adaptive Signal Processing (1994, in Polish), and co-author of two other books (1997 and 2000, in Polish): Neural Networks, Genetic Algorithms and Fuzzy Systems and Neural Networks for Image Compression. Dr. Rutkowski is President and Founder of the Polish Neural Networks Society. He organized and served as General Chair of the International Conferences on Artificial Intelligence and Soft Computing held in 1996, 1997, 1999, 2000, 2002, 2004 and 2006. He is a past Associate Editor of the IEEE Transactions on Neural Networks (1998-2005).
He is on the editorial boards of the International Journal of Applied Mathematics and Computer Science (1996-present) and the International Journal of Information Technology and Intelligent Computing (2006). Dr. Rutkowski was elevated to IEEE Fellow for contributions to neurocomputing and flexible fuzzy systems, and is a recipient of the IEEE Transactions on Neural Networks 2005 Outstanding Paper Award. He currently serves in the IEEE Computational Intelligence Society as Chair of the Distinguished Lecturer Program, and is the Founding Chair of the Polish Chapter of the IEEE Computational Intelligence Society, which won the 2008 Outstanding Chapter Award.

Jennie Si

Jennie Si
Department of Electrical Engineering
Arizona State University
USA


Title: Approximate Dynamic Programming for Continuous State and Control Problems

Abstract: Dynamic programming (DP) is an approach to computing the optimal control policy over time under nonlinearity and uncertainty by employing the principle of optimality introduced by Richard Bellman. Instead of enumerating all possible control sequences, dynamic programming only searches admissible state and/or action values that satisfy the principle of optimality, so its computational complexity can be much lower than that of direct enumeration. However, the computational effort and data storage requirements still increase exponentially with the dimensionality of the system, as reflected in the three curses of dimensionality: the state space, the observation space, and the action space. The traditional DP approach has thus been limited to small problems, while solutions are desired for realistic nonlinear dynamic system optimization and control problems, especially those represented by continuous state and control variables. This talk explores a class of approximate/adaptive dynamic programming algorithms suited to such problems, reviewing the algorithms, their implementations and properties, as well as how to apply them to large, realistic engineering problems such as power systems.
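
As a minimal illustration of Bellman's principle of optimality (classical value iteration, not any specific algorithm from the talk), here is a toy two-state MDP; the states, rewards, and discount factor are invented for the example. The talk's point is precisely that such tabular sweeps break down once state and control variables are continuous.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Apply the Bellman optimality backup until the value
    function stops changing."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy MDP: from A, "go" earns 1 and moves to B; from B, "stay" earns 1.
states = ["A", "B"]
actions = ["stay", "go"]
P = {"A": {"stay": {"A": 1.0}, "go": {"B": 1.0}},
     "B": {"stay": {"B": 1.0}, "go": {"A": 1.0}}}
R = {"A": {"stay": 0.0, "go": 1.0},
     "B": {"stay": 1.0, "go": 0.0}}

V = value_iteration(states, actions, P, R)
# Optimal value of both states is 1 / (1 - 0.9) = 10.
```

Approximate DP replaces the table V with a function approximator (e.g., a neural network), which is what makes continuous state and control spaces tractable.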

Biodata: Jennie Si received her B.S. and M.S. degrees from Tsinghua University, Beijing, China, and her Ph.D. from the University of Notre Dame. She has been on the faculty of the Department of Electrical Engineering at Arizona State University since 1991. Her research focuses on dynamic optimization using learning and neural network approximation approaches. This entails fundamental understanding of learning and adaptive systems and development of learning algorithms. In addition, she is interested in applying her systems knowledge to large physical systems such as semiconductor processes and biological neural systems. Recently she set up a neurophysiology lab using chronic multi-channel recording to study the neural mechanisms of decision and control in the rat motor cortical areas. Jennie Si received the NSF/White House Presidential Faculty Fellow Award in 1995, and a Motorola Engineering Excellence Award in the same year. She is a Fellow of the IEEE. She is a past Associate Editor of the IEEE Transactions on Automatic Control and the IEEE Transactions on Semiconductor Manufacturing, and a current Associate Editor of the IEEE Transactions on Neural Networks.

Jun Wang

Jun Wang
Department of Mechanical and Automation Engineering
The Chinese University of Hong Kong


Title: Neurodynamic Optimization - Model Designs and Selected Applications

Abstract:

Biodata: Jun Wang received his B.S. and M.S. degrees from Dalian Institute of Technology, and his Ph.D. from Case Western Reserve University. He served as a part-time CAD programmer at Zagar, Inc., Cleveland, Ohio from 1989 to 1990. He was an assistant professor from 1990 to 1993 and an associate professor from 1993 to 1997 at the University of North Dakota. He was an associate professor from 1995 to 2001, and has been a professor since 2002, in the Department of Mechanical and Automation Engineering at The Chinese University of Hong Kong. His research focuses on neural networks, computational intelligence, intelligent control, intelligent robotics, intelligent manufacturing and operations research. He is a Fellow of the IEEE. He has been an associate editor of the IEEE Transactions on Neural Networks since 1999 and of the IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics since 2003, and was an associate editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews from 2001 to 2005. He has been a member of the editorial advisory board of the International Journal of Neural Systems since 2006. He served as guest co-editor of the special issue on neural networks and operations research/management science of the European Journal of Operational Research (vol. 93, no. 2, 1996). He has held various positions at numerous international conferences, such as ISNN2004, ISNN2005, IROS2005, ISNN2006, WCCI2006, ICONIP2006, and WCCI2008. He is a past president of the Asia Pacific Neural Network Assembly (APNNA) and a council member of the China Neural Networks Council, Beijing, China. He is a member of several technical committees, such as the Technical Committees on Neural Networks (2004-present) and Emerging Technologies (2004, 2007) of the IEEE Computational Intelligence Society and the Technical Committee on Neural Systems and Applications of the IEEE Circuits and Systems Society (2002-present).
He served as a guest professor of Dalian University of Technology, Huazhong University of Science and Technology, and Southeast University, etc.

Donald Wunsch II

Donald C. Wunsch II
Department of Electrical and Computer Engineering
Missouri University of Science & Technology
USA


Title: Clustering

Abstract: Clustering has become an increasingly important topic in recent years, driven by the glut of data from a wide variety of disciplines. However, because of poor communication among these disciplines' research communities, similar theories and algorithms are redeveloped many times, causing unnecessary waste of time and resources. Furthermore, differing terminologies confuse practitioners, especially those new to cluster analysis, so clear and comprehensive information in this field is needed. This talk will give a sample of some important clustering algorithms rooted in statistics, computer science, computational intelligence, and machine learning, with an emphasis on new advances in recent years.
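
As one concrete example of the partitional algorithms such a survey covers, here is a minimal k-means sketch (pure Python, 1-D data for brevity; real implementations handle multivariate data, empty clusters and smarter initialization).

```python
def kmeans_1d(xs, centers, iters=50):
    """Lloyd's algorithm on scalar data: assign each point to its
    nearest center, then move each center to its cluster's mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)),
                          key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        # Keep a center in place if its cluster came up empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

data = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
print(sorted(kmeans_1d(data, [0.0, 6.0])))
```

The same assign-then-update skeleton underlies many of the variants developed independently across the communities the talk mentions.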

Biodata: Donald Wunsch has been, since 1999, the M.K. Finley Missouri Distinguished Professor at Missouri University of Science & Technology (formerly the University of Missouri - Rolla). His earlier employers were Texas Tech, Boeing, Rockwell International, and International Laser Systems. His education includes an Executive MBA from Washington University in St. Louis, a Ph.D. in Electrical Engineering and an M.S. in Applied Mathematics from the University of Washington (Seattle), and a B.S. in Applied Mathematics from the University of New Mexico. He has over 275 publications, including 63 journal articles and 5 books, has attracted $6M in research funding, and has produced 13 Ph.D. graduates in three disciplines. His key contributions to date are: adaptive resonance and reinforcement learning hardware and applications; neurofuzzy regression; improved Traveling Salesman Problem heuristics; clustering; and bioinformatics. He is an IEEE Fellow, served as INNS President in 2005 and IJCNN General Chair in 2003, has been an INNS Fellow and Senior Fellow since 2007, and has served on the Boards of the IEEE Neural Networks Council, the International Neural Network Society, and the University of Missouri Bioinformatics Consortium.

Gary Yen

Gary G. Yen
School of Electrical and Computer Engineering
Oklahoma State University
USA


Title: Ranked Centroid Projection: A Textual Data Visualization Approach Based on the Self-Organizing Map

Abstract: The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this talk, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
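
For readers new to the SOM itself (the GHSOM and RCP of the talk build on it), a minimal 1-D SOM sketch follows; the unit count, learning rate and neighborhood kernel are illustrative toy choices, not those of the proposed system.

```python
import random

def train_som(data, n_units=10, epochs=100, lr=0.3, radius=2):
    """Train a 1-D chain of units on scalar inputs: find the
    best-matching unit (BMU), then pull the BMU and its neighbors
    toward the input with a distance-decaying strength."""
    weights = [random.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(x - weights[i]))
            for i in range(n_units):
                d = abs(i - bmu)
                if d <= radius:
                    weights[i] += lr * (0.5 ** d) * (x - weights[i])
    return weights

random.seed(0)
weights = train_som([0.2, 0.8])
```

In document clustering, the scalar inputs become high-dimensional document vectors and the chain becomes a 2-D grid, but the BMU-plus-neighborhood update is the same.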

Biodata: Gary G. Yen received the Ph.D. degree in electrical and computer engineering from the University of Notre Dame, Notre Dame, Indiana in 1992. He is currently a Professor in the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, Oklahoma. Before joining OSU in 1997, he was with the Structure Control Division, U.S. Air Force Research Laboratory in Albuquerque, NM. His research is supported by the DoD, DoE, EPA, NASA, NSF, and process industry. His research interests include intelligent control, computational intelligence, conditional health monitoring, signal processing and their industrial/defense applications. Dr. Yen was an associate editor of the IEEE Transactions on Neural Networks; IEEE Transactions on Systems, Man and Cybernetics; IEEE Transactions on Control Systems Technology; IEEE Control Systems Magazine; and Automatica. He is currently serving as an associate editor for the IEEE Transactions on Evolutionary Computation and Mechatronics. He served as General Chair for the 2003 IEEE International Symposium on Intelligent Control held in Houston, TX and the 2006 IEEE World Congress on Computational Intelligence held in Vancouver, Canada. Dr. Yen served as Vice President for Technical Activities of the IEEE Computational Intelligence Society and has been the founding editor-in-chief of the IEEE Computational Intelligence Magazine since 2006.

Wen Yu

Wen Yu
Departamento de Control Automático
CINVESTAV-IPN
Mexico


Title: Stable Neural Identification for Unknown Nonlinear Systems

Abstract: Recent results show that neural network techniques are very effective for identifying a wide class of complex nonlinear systems when complete model information is unavailable, or even when the plant is treated as a black box. For engineers it is very important to assure stability in theory before applying neural identification to real systems. In this talk, several training approaches are given, such as time-varying learning rates, bounding ellipsoids and the dead-zone Kalman filter. Passivity and stability properties are discussed for feedforward and recurrent neural network modeling. An application to the modeling of crude oil blending is presented.

Biodata: Wen Yu received the B.S. degree from Tsinghua University, Beijing, China in 1990 and the M.S. and Ph.D. degrees, both in Electrical Engineering, from Northeastern University, Shenyang, China, in 1992 and 1995, respectively. From 1995 to 1996, he served as a Lecturer in the Department of Automatic Control at Northeastern University, Shenyang, China. In 1996, he joined CINVESTAV-IPN, México, where he is a professor in the Departamento de Control Automático. He held a research position with the Instituto Mexicano del Petróleo from December 2002 to November 2003, was a visiting senior research fellow at Queen’s University Belfast from October to December 2006, and has been a visiting professor at Northeastern University in China from 2006 to 2008. He is currently an associate editor of Neurocomputing and the International Journal of Modelling, Identification and Control. He is a senior member of IEEE and a member of the Mexican Academy of Sciences. His research interests include adaptive control, neural networks, and fuzzy control.

Jun Zhang

Jun Zhang
Department of Psychology
University of Michigan
USA


Title: Kullback-Leibler Divergence, Bregman Divergence, and Their Generalizations

Abstract: A core concept in statistical inference and machine learning is the notion of a “divergence function”, which measures the (not necessarily symmetric) difference between two probability density functions in an infinite-dimensional function space (e.g. Kullback-Leibler divergence), or between two points in a finite-dimensional vector space (e.g. Bregman divergence). It is known that KL divergence and Bregman divergence are both “canonical” divergence functions, in the sense that they are uniquely defined for the class of so-called dually flat manifolds, which enjoy a globally affine structure. Various extensions of these divergence functions, including f-divergence, alpha-divergence, the Jensen difference, etc., will be discussed. It will be shown that they can all be unified under a convex-analytic framework: they induce a family of statistical manifolds with the Fisher information as the Riemannian metric and the alpha-connections as the family of affine connections (which include the dually flat connections as special cases). This theoretical insight will hopefully guide the design and analysis of computational algorithms that involve minimization of risk/cost/error through the use of a divergence function.
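
The relation between the two canonical divergences can be checked numerically: taking the convex generator F to be negative Shannon entropy, the Bregman construction D_F(p, q) = F(p) - F(q) - <grad F(q), p - q> reproduces the KL divergence for normalized p and q. A small illustrative sketch (toy vectors, not from the talk):

```python
import math

def bregman(p, q, F, gradF):
    """Bregman divergence induced by a convex generator F."""
    inner = sum(g * (pi - qi) for g, pi, qi in zip(gradF(q), p, q))
    return F(p) - F(q) - inner

def neg_entropy(p):
    return sum(x * math.log(x) for x in p)

def grad_neg_entropy(p):
    return [math.log(x) + 1.0 for x in p]

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p, q = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
# For normalized p and q the two quantities agree.
print(bregman(p, q, neg_entropy, grad_neg_entropy), kl(p, q))
```

With F(x) = sum of x_i squared, the same construction yields the squared Euclidean distance, which is the sense in which the Bregman family generalizes both examples.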

Biodata: Dr. Zhang received the B.Sc. degree in Theoretical Physics from Fudan University in 1985, and the Ph.D. degree in Neuroscience from the University of California, Berkeley in 1992. Supported by a McDonnell-Pew Postdoctoral Fellowship, he worked at the Computational Neuroscience Laboratory headed by Terrence Sejnowski at the Salk Institute. He then joined the faculty of the Department of Psychology at the University of Michigan, serving as Assistant Professor, Associate Professor, and, starting September 2007, Full Professor. He has also held visiting positions as Senior Lecturer, Associate Professor, and Research Scientist at the University of Melbourne, the University of Waterloo, and the RIKEN Brain Science Institute, respectively. In 2003, he was selected to deliver the Winer Memorial Serial Lectures at the Department of Psychology, Purdue University. He has published two dozen peer-reviewed journal papers in the fields of vision, mathematical psychology, cognitive psychology, game theory, voting theory, etc. Dr. Zhang serves as an Associate Editor for the Journal of Mathematical Psychology and is the elected President of the Society for Mathematical Psychology. He has organized or co-organized several international conferences and workshops in mathematical psychology and information geometry. His research has been funded by the National Science Foundation (NSF) and the Air Force Office of Scientific Research (AFOSR).

 
iwaci2008/program.txt · Last modified: 2008/05/31 23:42