The Laboratory for Artificial Intelligence comprises several groups performing research in different areas of Artificial Intelligence. The LAIR was formed in the 1970s, and the core researchers from that era form the Cognitive Systems group. The current groups are:
There are a number of other groups at OSU conducting research in Artificial Intelligence; while not formally part of LAIR, they have overlapping interests and sometimes take part in collaborative projects.
LAIR research areas often cut across groups; while each section below describes a general activity, it is best to see individual faculty members' sites for more specific information.
Computational Learning Theory is concerned with developing algorithms that allow computers to make decisions and find patterns by observing data (rather than through explicitly specified rules). One line of research focuses on designing and analyzing practical machine learning algorithms that exploit the non-linear structure of high-dimensional data, in particular manifold and spectral methods. Researchers are also interested in a range of theoretical questions concerning the computational and statistical limits of learning and the mathematical foundations of learning structure from data. Large data sets also pose a significant challenge for machine learning; finding methods for exploiting huge data sets is an active area of research.
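To make the idea of a spectral method concrete, here is a minimal, hedged sketch (not any specific LAIR algorithm): it embeds a toy point cloud using the eigenvectors of a graph Laplacian built from pairwise Gaussian similarities, the core step shared by spectral clustering and Laplacian-eigenmap-style manifold learning.

```python
import numpy as np

# Toy spectral embedding: build a similarity graph over the data,
# form its Laplacian, and embed points with the low eigenvectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))          # 30 points in 5 dimensions

# Gaussian (heat-kernel) affinity matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq_dists / 2.0)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = D - W
D = np.diag(W.sum(axis=1))
L = D - W

# Eigenvectors with the smallest nonzero eigenvalues give a
# low-dimensional embedding that respects the graph structure.
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
embedding = eigvecs[:, 1:3]            # skip the trivial constant eigenvector
print(embedding.shape)                 # (30, 2)
```

The smallest eigenvalue is zero for a connected graph; the next few eigenvectors vary slowly over the graph, which is why they serve as coordinates that preserve local structure.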
Factor graph of a Conditional Random Field for Articulatory Feature transcription of speech.
Panoramic view of area around Dreese Labs.
Advanced video surveillance systems use computers equipped with video cameras not only to detect the presence of people and track them, but also to identify their activities. The research has broad implications for Homeland Security, as well as for search and rescue, border patrol, law enforcement, and many military applications. The systems combine video cameras with machine learning methods, enabling the computer to perform the kind of visual recognition that seems effortless for humans. This line of research involves technologies from Computer Vision, Visual Perception, Human-Computer Interaction, Motion Capture, and Artificial Intelligence.
Image segmentation using the LEGION neurodynamical model.
The general strategy adopted by this lab is to focus on challenging problems that arise from real-world perception, and then attack them with multidisciplinary approaches. The analysis includes computational, cognitive/perceptual, and neurobiological perspectives. While paying close attention to cognitive and neurobiological processes, the thrust of the work conducted in this lab is computational.
In terms of neurodynamics, we view the brain as a gigantic dynamical system, and we build dynamical systems both for solving engineering problems and for understanding neurocomputational mechanisms. To illustrate this strategy, LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks), invented by David Terman and DeLiang Wang (Terman & Wang, 1995; Wang & Terman, 1995), builds on neural oscillations in the brain and perceptual organization in human perception. The network shows remarkable computational power in synchronizing a locally coupled oscillator population and desynchronizing different populations. The LEGION network has been applied to image segmentation (see Wang & Terman, 1997) and speech segregation (see Wang & Brown, 1999), among other applications.
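The synchronization principle behind LEGION can be illustrated with a deliberately simplified toy model. The sketch below uses Kuramoto phase oscillators on a chain, not the actual Terman-Wang relaxation-oscillator equations; with local (nearest-neighbour) coupling and identical natural frequencies, the population settles into synchrony, which LEGION exploits to bind units belonging to the same object.

```python
import numpy as np

# Toy model of local coupling driving synchrony (Kuramoto phases on a
# chain; NOT the Terman-Wang equations used in the real LEGION network).
rng = np.random.default_rng(1)
n = 10
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
omega = 1.0                            # identical natural frequencies
K = 2.0                                # coupling strength
dt = 0.01

def step(theta):
    # each oscillator is coupled only to its chain neighbours (local coupling)
    coupling = np.zeros(n)
    coupling[1:] += np.sin(theta[:-1] - theta[1:])
    coupling[:-1] += np.sin(theta[1:] - theta[:-1])
    return theta + dt * (omega + K * coupling)

for _ in range(5000):
    theta = step(theta)

# Kuramoto order parameter: r approaches 1 as the population synchronizes
r = abs(np.exp(1j * theta).mean())
print(r)
```

Desynchronizing distinct populations, the other half of LEGION's behaviour, requires the global inhibitor and relaxation-oscillator dynamics omitted from this sketch.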
Example of diagrammatic reasoning evaluating routes of tanks in a sensor field.
The LAIR research team has concentrated its efforts on creating theories about intelligence, in addition to building socially and technologically useful tools that embody these theories. We have been analyzing diagnostic and design tasks, which, in turn, have led us to a study of how causal processes are understood and used. We have also begun research into how visual representations are used during problem solving. All this research is done in various real-world domains; engineering and medicine provide us with most of our challenges.
A recent focus has been on issues related to diagrammatic reasoning. We have been experimenting with the use of our Seeker-Filter-Viewer architecture for multi-criterial decision-making, specifically for Course of Action planning and as a data mining tool for understanding a decision space.
Abduction or Inference to the Best Explanation is a form of inference that follows a pattern like this:
D is a collection of data (facts, observations, givens),
H explains D (would, if true, explain D),
No other hypothesis explains D as well as H does.
Therefore, H is probably correct.
The strength of an abductive conclusion will in general depend on several factors, including:
That the strength of an abductive conclusion "will in general" depend on these factors means that it should depend on these factors, and that, insofar as we are intelligent creatures, our conclusions actually will depend on them.
Time-frequency representation of speech extracted from a noisy environment.
Human listeners are able to perceptually segregate one sound source from an acoustic mixture, such as a single voice from a mixture of other voices and music at a busy cocktail party. How can we engineer "machine listening" systems that achieve this perceptual feat?
Albert Bregman's book Auditory Scene Analysis, published in 1990, draws an analogy between the perception of auditory scenes and visual scenes, and describes a coherent framework for understanding the perceptual organization of sound. His account has stimulated much interest in computational studies of hearing. Such studies are motivated in part by the demand for practical sound separation systems, which have many applications including noise-robust automatic speech recognition, hearing prostheses, and automatic music transcription. This emerging field has become known as computational auditory scene analysis (CASA).
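One widely used formal goal in CASA is the ideal binary mask: keep the time-frequency units where the target is stronger than the interference, and discard the rest. The sketch below is a hedged illustration only; it assumes access to the separate target and noise signals (which a real CASA system must estimate, not observe) and uses random matrices in place of real spectrograms.

```python
import numpy as np

# Ideal binary mask (IBM) illustration on fake spectrogram magnitudes.
# A real system only observes the mixture and must ESTIMATE the mask.
rng = np.random.default_rng(0)
target = rng.rayleigh(1.0, size=(100, 64))   # time frames x freq channels
noise = rng.rayleigh(1.0, size=(100, 64))
mixture = target + noise

snr_db = 0.0                                  # local SNR criterion in dB
mask = (20 * np.log10(target / noise)) > snr_db

separated = mixture * mask    # retain only target-dominant units
print(mask.mean())            # fraction of units assigned to the target
```

Masked units are then resynthesized into a waveform in a full system; here the point is only the unit-by-unit target-versus-interference decision.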
[Excerpted from the website for DeLiang Wang and Guy Brown's book on CASA, Computational Auditory Scene Analysis: Principles, Algorithms, and Applications.]