** This seminar is sponsored jointly by the Computational Linguistics
Colloquium Series and the Logic and AI Seminar Series. **

September 25, 2001
11:30pm, AVW Room 3258

Title: Grounding Knowledge in Sensors: Unsupervised Learning for Language
and Planning
Speaker: Tim Oates, UMBC

The physical world and the language that we use to describe it are full of
structure. Very young children discover this structure with apparent
ease. They somehow transform sensory information gathered while exploring their
environment into knowledge that enables both successful planning and natural
language communication, two of the defining characteristics of human
intelligence. The goal of my research is to understand how robots can
autonomously discover similarly useful structure in their sensor data. In
this talk I will describe a single computational model that accounts for the
unsupervised discovery of both the fundamental units of natural languages -
words and their meanings - and the fundamental units of plans - actions and
their effects.

At the core of the model is an algorithm called PERUSE that discovers
recurring patterns in real-valued, multivariate time series. Given a set of
time series containing acoustic data from spoken utterances, PERUSE
discovers patterns that correspond to recurring words. Once the robot has
discovered words, a second algorithm learns their denotations from
non-auditory sensor data about the robot's environment. The end result is a
set of word/meaning pairs that allow the robot to make probabilistic
judgments about the referents of words that it hears and about the chances
of communicative success when using a word to describe its environment. When
these algorithms are applied to sensor data collected while the robot takes
actions, the patterns discovered by PERUSE represent possible effects. The
result is a set of action/effect pairs that allow
the robot to make probabilistic predictions about the results of taking
actions from particular regions of continuous action spaces.
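
The abstract does not describe PERUSE's internals, so the following Python
sketch is only a rough illustration of what discovering a recurring pattern
in a real-valued, multivariate time series can mean: a simple sliding-window,
nearest-neighbor search. The function name, the fixed window length, and the
Euclidean matching criterion are all assumptions made for illustration, not
details of PERUSE itself.

    import numpy as np

    def find_recurring_pattern(series, window):
        # series: array of shape (T, d) -- T time steps, d sensor channels.
        # Returns the pair of non-overlapping length-`window` windows that
        # are closest in Euclidean (Frobenius) distance, i.e. the best
        # candidate for a recurring pattern.
        T = series.shape[0]
        best = (np.inf, 0, 0)
        for i in range(T - window + 1):
            a = series[i:i + window]
            # Compare only against windows that start after window i ends.
            for j in range(i + window, T - window + 1):
                dist = np.linalg.norm(a - series[j:j + window])
                if dist < best[0]:
                    best = (dist, i, j)
        return best  # (distance, first start index, second start index)

    # Toy usage: the same sine burst embedded twice in 2-channel noise.
    rng = np.random.default_rng(0)
    T, d, w = 200, 2, 20
    data = rng.normal(0.0, 0.3, size=(T, d))
    burst = np.stack([np.sin(np.linspace(0, 2 * np.pi, w))] * d, axis=1)
    data[30:30 + w] += burst
    data[140:140 + w] += burst
    dist, i, j = find_recurring_pattern(data, w)
    print(f"closest recurring windows start at t={i} and t={j}")

Unlike this toy version, a real system such as PERUSE must also cope with
patterns of unknown and varying length and with noisy, probabilistic matches,
but the sketch conveys the basic search problem.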

I will describe two sets of experiments. In the first, human subjects played
with blocks in front of a robot and generated unrestricted natural language
utterances to describe the blocks and their configurations. The system
successfully discovered words and their denotations in three different
languages - English, German, and Mandarin Chinese. In the second experiment,
the robot sampled actions randomly from a continuous action space that
determined its path of motion with respect to objects in its environment.
The robot successfully discovered qualitatively distinct interactions as
well as regions in its action space that reliably led to the various
interactions.

------------------------------------------------------------------------
For more info about LAISEM and upcoming events please check
http://www.umiacs.umd.edu/seminars/laisem.htm
------------------------------------------------------------------------